Open-vocabulary spoken term detection using graphone-based hybrid recognition systems
Murat Akbacak, D. Vergyri, A. Stolcke
2008 IEEE International Conference on Acoustics, Speech and Signal Processing. Pub Date: 2008-05-12. DOI: 10.1109/ICASSP.2008.4518841
We address the problem of retrieving out-of-vocabulary (OOV) words and queries from audio archives for the spoken term detection (STD) task. Many STD systems rely on the output of an automatic speech recognition (ASR) system with a limited, fixed vocabulary, and are therefore unable to detect rare words of high information content, such as named entities. Since such words are often of great interest for retrieval, it is important to index spoken archives in a way that allows a user to search for an OOV query/term. In this work, we employ hybrid recognition systems that contain both words and subword units (graphones) to generate hybrid lattice indexes. We use a word-based STD system as our baseline and demonstrate improvements with our proposed hybrid STD system, which uses words plus graphones, on the English broadcast news genre of the 2006 NIST STD task.

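As an illustrative sketch only (not the paper's method), the subword-indexing idea behind OOV retrieval can be approximated by decomposing terms into overlapping character n-grams as a crude stand-in for learned graphone units; the helper names, the n-gram length, and the text-based index below are all assumptions for illustration.

```python
from collections import defaultdict

def subword_units(term, n=3):
    """Decompose a term into overlapping character n-grams,
    a crude stand-in for learned graphone units."""
    term = term.lower()
    if len(term) <= n:
        return [term]
    return [term[i:i + n] for i in range(len(term) - n + 1)]

def build_index(utterances):
    """Inverted index: subword unit -> set of utterance ids."""
    index = defaultdict(set)
    for uid, text in utterances.items():
        for word in text.split():
            for unit in subword_units(word):
                index[unit].add(uid)
    return index

def search(index, query):
    """Return ids of utterances containing every subword unit of the
    query, so an OOV term can match without being a vocabulary word."""
    hits = [index.get(u, set()) for u in subword_units(query)]
    return set.intersection(*hits) if hits else set()
```

In the paper, graphones are joint grapheme-phoneme units learned from data, and the search operates on hybrid recognition lattices rather than on text; this sketch illustrates only the subword-indexing principle.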
Embedded transform coding of audio signals by model-based bit plane coding
Thi Minh Nguyet Hoang, M. Oger, S. Ragot, M. Antonini
Pub Date: 2008-05-12. DOI: 10.1109/ICASSP.2008.4518534
This paper proposes a new model-based method for transform coding of audio signals. The input signal is mapped into a "perceptual" domain by a linear-predictive weighting filter followed by a modified discrete cosine transform (MDCT). To provide bitstream scalability, model-based bit plane coding is then applied with respect to the mean square error (MSE) criterion. We present methods to estimate the symbol probabilities in bit planes assuming a generalized Gaussian model for the distribution of the MDCT coefficients. We compare the performance of the proposed bitstream-scalable coder with stack-run coding and ITU-T G.722.1, presenting both objective and subjective quality results. The proposed coder is equivalent to or slightly worse than the reference coders, but offers the advantage of being scalable; the performance penalty due to bitstream scalability is evident at low bitrates.

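The embedded (truncatable) property that bit-plane coding provides can be sketched as follows. This is a generic magnitude/sign bit-plane decomposition of quantized coefficients, not the paper's model-based probability estimation; function names and the plane count are illustrative.

```python
import numpy as np

def bit_planes(coeffs, n_planes=8):
    """Split quantized coefficient magnitudes into bit planes, MSB first."""
    mags = np.abs(coeffs).astype(np.int64)
    signs = np.sign(coeffs)
    planes = [(mags >> p) & 1 for p in range(n_planes - 1, -1, -1)]
    return signs, planes

def reconstruct(signs, planes, n_decoded):
    """Decode only the first n_decoded planes: truncating the embedded
    bitstream still yields a coarse reconstruction."""
    n_planes = len(planes)
    mags = np.zeros_like(planes[0])
    for i in range(n_decoded):
        mags = (mags << 1) | planes[i]
    mags <<= (n_planes - n_decoded)   # shift back to the original scale
    return signs * mags
```

Decoding all planes recovers the quantized coefficients exactly; decoding only the most significant planes gives a lower-rate, lower-fidelity version, which is what makes the bitstream scalable.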
On optimal anchor node placement in sensor localization by optimization of subspace principal angles
J. Ash, R. Moses
Pub Date: 2008-05-12. DOI: 10.1109/ICASSP.2008.4518103
In sensor network self-localization, anchor nodes provide a convenient means to disambiguate scene translation and rotation, thereby affording estimates in an absolute coordinate system. However, localization performance depends on the positions of the anchor nodes relative to the unknown-location nodes. Conventional wisdom in the literature is that anchor nodes should be placed around the perimeter of the network. In this paper, we show analytically why this strategy works well universally. We demonstrate that perimeter placement forces the information provided by the anchor constraints to closely align with the subspace that cannot be estimated from inter-node measurements: the subspace of translations and rotations. Examples quantify the efficacy of perimeter placement of anchors.

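The subspace alignment optimized in the title can be quantified with principal angles, computed from the singular values of the product of orthonormal bases. The following is a generic numpy sketch of that standard computation; function and variable names are illustrative, not the paper's.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spaces of A and B.
    Small angles mean the two subspaces are closely aligned."""
    Qa, _ = np.linalg.qr(A)   # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)   # orthonormal basis for span(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(sigma, -1.0, 1.0))
```

For example, the planes spanned by {e1, e2} and {e1, e3} share one direction and are orthogonal in the other, giving angles 0 and pi/2.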
Unsupervised anchor space generation for similarity measurement of general audio
Lie Lu, A. Hanjalic
Pub Date: 2008-05-12. DOI: 10.1109/ICASSP.2008.4517544
Reliably measuring the similarity between audio clips is critical to many applications. In contrast to the conventional approach of measuring audio similarity directly on low-level features, in this paper we compute similarity in an anchor space. Each dimension of such a space corresponds to a semantic category (anchor). Mapping an audio clip onto this space yields a vector indicating the membership probability of the clip with respect to each semantic category; the more similar the mappings of two audio clips, the more similar the clips are. While an anchor space is typically generated in a supervised fashion, a supervised approach is infeasible in many realistic scenarios where the semantics of the audio content are too diverse or simply unknown a priori. We therefore propose an unsupervised approach to anchor space generation: spectral clustering groups audio clips with similar low-level features, and the resulting clusters are adopted as semantic categories. Using this semantic space for audio similarity computation yields a considerable accuracy improvement (7% in mAP) in an audio retrieval system compared with the conventional low-level-feature-based approach.

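A minimal sketch of anchor-space similarity, assuming cluster centroids are already available (the paper obtains its anchors via spectral clustering; the softmax soft-membership mapping and the `beta` parameter here are illustrative choices, not the paper's formulation):

```python
import numpy as np

def anchor_mapping(features, anchors, beta=1.0):
    """Map low-level feature vectors to soft membership probabilities
    over anchor centroids via a softmax of negative squared distances."""
    d2 = ((features[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def anchor_similarity(p1, p2):
    """Cosine similarity between two anchor-space vectors."""
    return float(p1 @ p2 / (np.linalg.norm(p1) * np.linalg.norm(p2)))
```

Two clips near the same anchor get nearly identical membership vectors and hence high similarity, while clips near different anchors get near-orthogonal vectors, which is the key property the anchor space provides.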
Maximum likelihood approach to speech enhancement for noisy reverberant signals
Takuya Yoshioka, T. Nakatani, T. Hikichi, M. Miyoshi
Pub Date: 2008-05-12. DOI: 10.1109/ICASSP.2008.4518677
This paper proposes a speech enhancement method for signals contaminated by room reverberation and additive background noise. The following conditions are assumed: (1) the spectral components of speech and noise are statistically independent Gaussian random variables; (2) the convolutive distortion channel is modeled as an auto-regressive system in each frequency bin; (3) the power spectral density of speech is modeled as an all-pole spectrum, while that of the noise is assumed to be stationary and given in advance. Under these conditions, the proposed method estimates the parameters of the channel and those of the all-pole speech model by maximum likelihood estimation. Experimental results showed that the proposed method successfully suppressed the reverberation and additive noise in three-second noisy reverberant signals when the reverberation time was 0.5 seconds and the reverberant-signal-to-noise ratio was 10 dB.

Image spam hunter
Yan Gao, Ming Yang, Xiaonan Zhao, Bryan Pardo, Ying Wu, T. Pappas, A. Choudhary
Pub Date: 2008-05-12. DOI: 10.1109/ICASSP.2008.4517972
Spammers are constantly creating sophisticated new weapons in their arms race with anti-spam technology, the latest of which is image-based spam. The newest image-based spam uses simple image processing techniques to vary the content of individual messages, e.g., by changing foreground colors, backgrounds, or font types, or even by rotating the images and adding artifacts to them. Such messages pose great challenges to conventional spam filters. In this paper, we propose a system that uses a probabilistic boosting tree to determine whether an incoming image is spam, based on global image features, i.e., color and gradient orientation histograms. The system identifies spam without the need for OCR and is robust to the kinds of variation found in current spam images. Evaluation results show that the system correctly classifies 90% of spam images while mislabeling only 0.86% of non-spam images as spam.

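The global features named in the abstract can be sketched as follows: per-channel color histograms concatenated with a magnitude-weighted gradient orientation histogram. The bin counts and normalization below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def global_features(img, n_color_bins=8, n_orient_bins=9):
    """Concatenate per-channel color histograms with a gradient
    orientation histogram, as a sketch of global image features."""
    img = img.astype(np.float64)
    feats = []
    for c in range(img.shape[2]):
        h, _ = np.histogram(img[..., c], bins=n_color_bins, range=(0, 256))
        feats.append(h / h.sum())            # normalized color histogram
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)               # per-axis finite differences
    orient = np.arctan2(gy, gx)              # orientation in [-pi, pi]
    mag = np.hypot(gx, gy)
    h, _ = np.histogram(orient, bins=n_orient_bins,
                        range=(-np.pi, np.pi), weights=mag)
    feats.append(h / (h.sum() + 1e-12))      # magnitude-weighted, normalized
    return np.concatenate(feats)
```

The resulting fixed-length vector (here 3*8 + 9 = 33 dimensions) would then be fed to a classifier such as the paper's probabilistic boosting tree.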
Towards analytical convergence analysis of proportionate-type NLMS algorithms
K. Wagner, M. Doroslovački
Pub Date: 2008-05-12. DOI: 10.1109/ICASSP.2008.4518487
To date, no theoretical results have been developed to predict the performance of the proportionate normalized least mean square (PNLMS) algorithm or any of its related algorithms, such as the mu-law PNLMS (MPNLMS) and the epsilon-law PNLMS (EPNLMS). In this paper, we develop an analytic approach to predicting the performance of the simplified PNLMS algorithm, which is closely related to the PNLMS algorithm. In particular, we demonstrate the ability of our theory to predict the mean square output error of the simplified PNLMS algorithm.

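One common simplified proportionate update, in which each tap's effective step size grows with its magnitude so that large echo-path taps adapt faster than under plain NLMS, can be sketched as follows. The parameter values and the exact gain rule are illustrative assumptions, not necessarily the paper's simplified PNLMS.

```python
import numpy as np

def simplified_pnlms(x, d, n_taps, mu=0.5, delta=0.01, eps=1e-8):
    """Proportionate-type NLMS sketch: per-tap gains proportional to
    the tap magnitude plus a small floor, normalized to sum to one."""
    w = np.zeros(n_taps)
    errors = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # u[k] = x[n - k]
        e = d[n] - w @ u
        g = np.abs(w) + delta               # proportionate gains
        g /= g.sum()
        w += mu * e * g * u / (u @ (g * u) + eps)
        errors[n] = e
    return w, errors
```

On a sparse system driven by white noise, the filter identifies the true impulse response, with the dominant taps converging fastest thanks to their larger gains.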
Relation between joint optimizations for multiuser MIMO uplink and downlink with imperfect CSI
M. Ding, S. Blostein
Pub Date: 2008-05-12. DOI: 10.1109/ICASSP.2008.4518318
Joint linear minimum sum mean-squared error (MSMSE) transmitter and receiver (transceiver) optimization problems are formulated for multiuser MIMO systems under a sum power constraint, assuming imperfect channel state information (CSI). Both the uplink and the dual downlink are considered. Based on the Karush-Kuhn-Tucker (KKT) conditions associated with both problems, a relation between the two problems is discovered, termed the uplink-downlink duality in sum MSE under imperfect CSI. As a result, the MSMSEs in both links are the same, and any admissible uplink design satisfying the KKT conditions can be translated for application to the downlink, and vice versa. Simulation results are provided to demonstrate the duality and show the impact of imperfect CSI.

Gaussian Mixture Kalman predictive coding of LSFs
Shaminda Subasingha, M. Murthi, S. Andersen
Pub Date: 2008-05-12. DOI: 10.1109/ICASSP.2008.4518725
Gaussian mixture model (GMM)-based predictive coding of line spectral frequencies (LSFs) has gained wide acceptance. In such coders, each mixture of a GMM can be interpreted as defining a linear predictive transform coder. In this paper, we optimize each of these linear predictive transform coders using Kalman predictive coding techniques, yielding GMM Kalman predictive coding. In particular, we show how suitable modeling of quantization noise leads to an adaptive a posteriori GMM that defines a signal-adaptive predictive coder providing superior coding of LSFs compared with the baseline GMM predictive coder. Moreover, we show how running the Kalman predictive coders to convergence can be used to design a stationary predictive coding system that again provides superior coding of LSFs, now with no increase in run-time complexity over the baseline.

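The closed-loop predictive quantization idea underlying such coders can be sketched with a scalar DPCM loop, where a fixed AR(1) predictor and a uniform residual quantizer stand in for the paper's GMM/Kalman machinery; the predictor coefficient and step size below are illustrative.

```python
import numpy as np

def predictive_code(lsf_seq, a=0.9, step=0.02):
    """Closed-loop DPCM sketch: predict each frame from the previous
    DECODED frame (so encoder and decoder stay in sync), quantize the
    residual, and reconstruct."""
    decoded = np.zeros_like(lsf_seq)
    prev = np.zeros(lsf_seq.shape[1])
    for t in range(len(lsf_seq)):
        pred = a * prev
        residual = lsf_seq[t] - pred
        q = np.round(residual / step) * step   # uniform residual quantizer
        decoded[t] = pred + q
        prev = decoded[t]
    return decoded
```

Because the prediction uses decoded (not clean) frames, quantization error never accumulates across frames: the per-frame reconstruction error stays bounded by half the quantizer step.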
Speaker normalization based on subglottal resonances
Shizhen Wang, A. Alwan, Steven M. Lulich
Pub Date: 2008-05-12. DOI: 10.1109/ICASSP.2008.4518600
Speaker normalization typically focuses on variability in the supra-glottal (vocal tract) resonances, which constitute a major cause of spectral mismatch. Recent studies show that the subglottal airways also affect the spectral properties of speech sounds. This paper presents a speaker normalization method based on estimating the second and third subglottal resonances. Since the subglottal airways do not change for a specific speaker, the subglottal resonances are independent of the sound type (vowel, consonant, etc.) and remain constant for a given speaker. This context-free property makes the proposed method well suited to speaker adaptation with limited data. The method is computationally more efficient than maximum-likelihood-based VTLN and performs better than VTLN, especially with limited adaptation data. Experimental results confirm that the method performs well across a variety of testing conditions and tasks.
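A hedged sketch of how fixed per-speaker resonances could drive a VTLN-style warp: the reference resonance values, the averaging rule, and the linear warping convention below are all hypothetical illustrations, not the paper's estimation procedure.

```python
def subglottal_warp_factor(sg2_hz, sg3_hz,
                           ref_sg2_hz=1400.0, ref_sg3_hz=2300.0):
    """Per-speaker warp factor from the second and third subglottal
    resonances, relative to hypothetical reference values."""
    return 0.5 * (sg2_hz / ref_sg2_hz + sg3_hz / ref_sg3_hz)

def warp_frequency(f_hz, alpha):
    """Linear VTLN-style warp (one common convention: divide by alpha)."""
    return f_hz / alpha
```

Because the subglottal resonances are constant for a speaker, a single (sg2, sg3) estimate from any short utterance fixes the warp factor for all of that speaker's speech, which is what makes the approach attractive for limited-data adaptation.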