Title: Non-parallel voice conversion using i-vector PLDA: towards unifying speaker verification and transformation
Authors: T. Kinnunen, Lauri Juvela, P. Alku, J. Yamagishi
Venue: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pub Date: 2017-06-16
DOI: 10.1109/ICASSP.2017.7953215
Abstract: Text-independent speaker verification (recognizing speakers regardless of content) and non-parallel voice conversion (transforming voice identities without requiring content-matched training utterances) are related problems. We adopt the i-vector method for voice conversion. An i-vector is a fixed-dimensional representation of a speech utterance that enables treating voice conversion in the utterance domain rather than the frame domain. The high dimensionality (800) and the small number of training utterances (24) necessitate the use of prior information about the speakers. We adopt probabilistic linear discriminant analysis (PLDA) for voice conversion. The proposed approach requires neither parallel utterances, transcriptions, nor time-alignment procedures at any stage.
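A minimal sketch of the "utterance-domain" idea described in the abstract, not the paper's actual i-vector/PLDA pipeline: each variable-length utterance is collapsed into one fixed-dimensional vector, and conversion then operates on that single vector instead of frame by frame. The mean-pooling embedding and mean-shift conversion below are illustrative stand-ins for i-vector extraction and PLDA.

```python
# Toy stand-in for i-vector/PLDA voice conversion: represent each utterance
# by the per-dimension mean of its frame features (a fixed-dimensional
# "utterance-domain" embedding), then convert a source embedding by swapping
# the source-speaker mean for the target-speaker mean.

def utterance_embedding(frames):
    """Collapse a variable-length list of frame feature vectors into one
    fixed-dimensional vector (here: the per-dimension mean)."""
    dim = len(frames[0])
    return [sum(f[d] for f in frames) / len(frames) for d in range(dim)]

def speaker_mean(embeddings):
    """Average the utterance embeddings of one speaker."""
    dim = len(embeddings[0])
    return [sum(e[d] for e in embeddings) / len(embeddings) for d in range(dim)]

def convert(embedding, source_mean, target_mean):
    """Utterance-domain conversion: remove the source-speaker component,
    add the target-speaker component."""
    return [e - s + t for e, s, t in zip(embedding, source_mean, target_mean)]
```

No parallel utterances or time alignment are needed in this sketch either: only per-speaker statistics of independently collected utterances.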
Title: LDA-based context dependent recurrent neural network language model using document-based topic distribution of words
Authors: Md. Akmal Haidar, M. Kurimo
Venue: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pub Date: 2017-06-16
DOI: 10.1109/ICASSP.2017.7953254
Abstract: Adding context information into recurrent neural network language models (RNNLMs) has recently been investigated to improve the effectiveness of RNNLM training. Conventionally, a fast approximate topic representation for a block of words is obtained from the corpus-based topic distribution of words using a latent Dirichlet allocation (LDA) model, and is then updated for each subsequent word using an exponential decay. However, words can represent different topics in different documents. In this paper, we form a document-based distribution over topics for each word using an LDA model and apply it in the computation of the fast, approximate, exponentially decaying features. Experimental results on the well-known Penn Treebank corpus show that our approach outperforms the conventional LDA-based context RNNLM approach. Moreover, speech recognition experiments on the Wall Street Journal corpus yield word error rate (WER) improvements over the other approach.
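The exponentially decaying context feature mentioned above can be sketched directly: the running topic vector is a convex blend of the previous feature and the current word's topic distribution. Here `topic_dist` is a stub lookup standing in for the (document-based) LDA posterior, and the decay `gamma` is an illustrative choice.

```python
# Sketch of the exponentially decaying topic feature:
#   f_t = gamma * f_{t-1} + (1 - gamma) * z(w_t)
# where z(w_t) is the topic distribution of word w_t (from an LDA model;
# stubbed here as a dictionary).

def decayed_topic_features(words, topic_dist, gamma=0.9):
    """Return the sequence of context feature vectors f_1..f_T."""
    k = len(next(iter(topic_dist.values())))
    f = [0.0] * k
    history = []
    for w in words:
        z = topic_dist.get(w, [1.0 / k] * k)  # back off to uniform for OOV words
        f = [gamma * fi + (1.0 - gamma) * zi for fi, zi in zip(f, z)]
        history.append(f)
    return history
```

The feature at each position is what gets concatenated to the RNNLM input as context; the paper's contribution is computing `z(w_t)` per document rather than once per corpus.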
Title: Privacy preserving encrypted phonetic search of speech data
Authors: C. Glackin, G. Chollet, Nazim Dugan, Nigel Cannings, J. Wall, Shahzaib Tahir, I. G. Ray, M. Rajarajan
Venue: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pub Date: 2017-06-16
DOI: 10.1109/ICASSP.2017.7953391
Abstract: This paper presents a strategy for enabling speech recognition to be performed in the cloud whilst preserving the privacy of users. The approach advocates a demarcation of responsibilities between the client- and server-side components for performing the speech recognition task. On the client side resides the acoustic model, which symbolically encodes the audio and encrypts the data before uploading to the server. The server side then employs searchable encryption to enable the phonetic search of the speech content. Some preliminary results for speech encoding and searchable encryption are presented.
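The client/server split can be illustrated with a deliberately simplified toy: the client turns audio into phoneme symbols (given directly below), encrypts phoneme n-grams with a keyed deterministic hash, and uploads only those tokens; the server matches an encrypted query n-gram without ever seeing plaintext phonemes. HMAC as a stand-in for searchable encryption, the trigram size, and all names are illustrative assumptions, not the paper's scheme.

```python
import hmac
import hashlib

# Toy stand-in for encrypted phonetic search: deterministic keyed hashing
# lets equal plaintext n-grams produce equal ciphertext tokens, so the
# server can match without decrypting. Real searchable-encryption schemes
# are considerably more involved.

def encrypt_token(key: bytes, token: str) -> str:
    return hmac.new(key, token.encode(), hashlib.sha256).hexdigest()

def index_phonemes(key, phonemes, n=3):
    """Client side: build an encrypted index of phoneme trigrams."""
    grams = [" ".join(phonemes[i:i + n]) for i in range(len(phonemes) - n + 1)]
    return [encrypt_token(key, g) for g in grams]

def server_search(index, encrypted_query):
    """Server side: positions where the encrypted query trigram occurs."""
    return [i for i, tok in enumerate(index) if tok == encrypted_query]
```

Because the hash is keyed, only a client holding the key can form valid queries; the server learns match positions but not phoneme content.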
Title: Mood detection from daily conversational speech using denoising autoencoder and LSTM
Authors: Kun-Yi Huang, Chung-Hsien Wu, Ming-Hsiang Su, Hsiang-Chi Fu
Venue: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pub Date: 2017-06-16
DOI: 10.1109/ICASSP.2017.7953133
Abstract: In current studies, an extended subjective self-report method is generally used for measuring emotions. Even though it is commonly accepted that the speech emotion perceived by the listener is close to the intended emotion conveyed by the speaker, research has indicated that a mismatch remains between them. In addition, individuals with different personalities generally express emotions differently. Based on these observations, in this study a support vector machine (SVM)-based emotion model is first developed to detect perceived emotion in daily conversational speech. Then, a denoising autoencoder (DAE) is used to construct an emotion conversion model characterizing the relationship between the perceived emotion and the expressed emotion of a subject with a specific personality. Finally, a long short-term memory (LSTM)-based mood model is constructed to model the temporal fluctuation of speech emotions for mood detection. Experimental results show that the proposed method achieved a detection accuracy of 64.5%, a 5.0% improvement over an HMM-based method.
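The final stage above turns a sequence of per-utterance emotions into a single mood decision. As a lightweight stand-in for the LSTM mood model, the sketch below exponentially smooths a sequence of utterance-level valence scores and thresholds the result; the smoothing factor, threshold, and two-class labels are illustrative assumptions.

```python
# Toy stand-in for the LSTM mood model: track the temporal fluctuation of
# utterance-level emotion (valence in [-1, 1]) with exponential smoothing,
# then threshold the final state into a mood label.

def detect_mood(valence_sequence, alpha=0.3, threshold=0.0):
    """Smooth the valence trajectory and label the overall mood."""
    s = valence_sequence[0]
    for v in valence_sequence[1:]:
        s = alpha * v + (1.0 - alpha) * s  # recent utterances weigh more
    return "positive" if s > threshold else "negative"
```

An LSTM replaces this fixed recurrence with a learned one, which is what lets the paper's model capture longer-range mood dynamics.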
Title: Decorrelation for audio object coding
Authors: L. Villemoes, T. Hirvonen, H. Purnhagen
Venue: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pub Date: 2017-05-19
DOI: 10.1109/ICASSP.2017.7952247
Abstract: Object-based representations of audio content are increasingly used in entertainment systems to deliver immersive and personalized experiences. Efficient storage and transmission of such content can be achieved by joint object coding algorithms that convey a reduced number of downmix signals together with parametric side information that enables object reconstruction in the decoder. This paper presents an approach to improve the performance of joint object coding by adding one or more decorrelators to the decoding process. Listening test results illustrate the performance as a function of the number of decorrelators. The method is adopted as part of the Dolby AC-4 system standardized by ETSI.
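The decoder-side idea can be sketched as follows: objects are reconstructed from the downmix through a dry path (parametric gains) plus a wet path fed by a decorrelator, which restores spatial width the gains alone cannot. The plain-delay decorrelator and per-object scalar gains below are crude illustrative stand-ins; real systems use all-pass decorrelators and time/frequency-dependent parameters.

```python
# Sketch of parametric object reconstruction with a decorrelator:
#   x_hat_i = g_i * y + w_i * D(y)
# where y is the downmix, g_i/w_i are dry/wet gains from the side
# information, and D is a toy decorrelator (a plain delay).

def downmix(objects):
    """Sum object signals sample-wise into one downmix channel."""
    return [sum(samples) for samples in zip(*objects)]

def decorrelate(signal, delay=3):
    """Toy decorrelator: a plain delay (crude stand-in for an all-pass)."""
    if delay == 0:
        return list(signal)
    return [0.0] * delay + signal[:-delay]

def reconstruct(y, dry_gains, wet_gains, delay=3):
    """Rebuild one signal per object from the shared downmix."""
    d = decorrelate(y, delay)
    return [[g * yi + w * di for yi, di in zip(y, d)]
            for g, w in zip(dry_gains, wet_gains)]
```

With zero wet gains this collapses to plain parametric upmixing; the paper's listening tests measure how much the wet path improves on that baseline.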
Title: Data analysis as a web service: A case study using IoT sensor data
Authors: Alireza Ahrabian, Ş. Kolozali, Shirin Enshaeifar, C. C. Took, P. Barnaghi
Venue: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pub Date: 2017-05-19
DOI: 10.1109/ICASSP.2017.7953308
Abstract: The advent of the Internet of Things has resulted in the development of infrastructure for capturing and storing data from domains ranging from smart devices (e.g. smartphones) to smart cities. These data are often publicly available and have enabled a wider range of data consumers to utilise such data sets for applications ranging from scientific experimentation to enhancing commercial activity for businesses. Accordingly, there is a need for data analysis tools that are both simple to use and provide the most effective methods for a given data set. To this end, we introduce data analysis tools as a web service, enabling the data consumer to process data over the Internet with a simple HTTP request. By providing such tools as a web service, we demonstrate the potential of the system to aid both advanced and novice data consumers. Furthermore, this work provides a use-case example of the proposed tool on publicly available data from the CityPulse smart-city IoT project.
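The "analysis over a simple HTTP request" pattern can be shown end to end with the standard library: a tiny server exposes one analysis endpoint, and a client posts data and receives a result. The `/analyze` endpoint name, JSON shape, and mean statistic are hypothetical illustrations, not the paper's actual API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from statistics import mean
from urllib.request import Request, urlopen

# Minimal "data analysis as a web service" sketch: POST a JSON list of
# numbers, get back a JSON object with the computed statistic.

class AnalysisHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        data = json.loads(self.rfile.read(length))
        body = json.dumps({"mean": mean(data["values"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def start_server():
    """Start the service on an ephemeral localhost port."""
    server = HTTPServer(("127.0.0.1", 0), AnalysisHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def analyze(port, values):
    """Client side: one HTTP request, one analysis result."""
    req = Request(f"http://127.0.0.1:{port}/analyze",
                  data=json.dumps({"values": values}).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())
```

The point of the design is that the consumer needs no analysis software locally; swapping `mean` for richer methods changes only the server.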
Title: Feature++: Cross dimension feature fusion for road detection
Authors: Wenli He, Guorong Cai, Zhun Zhong, Songzhi Su
Venue: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pub Date: 2017-05-08
DOI: 10.1109/ICASSP.2017.7952439
Abstract: Road detection is a key component of Advanced Driving Assistance Systems, providing valid space and candidate object regions for vehicles. Mainstream road detection methods have focused on extracting discriminative features. In this paper, we propose a robust feature fusion framework, called "Feature++", which combines superpixel features with 3D features extracted from stereo images. A neural network classifier is then trained to decide whether a superpixel is a road region. Finally, the classification results are refined by a conditional random field. Experiments conducted on the KITTI ROAD benchmark show that the proposed "Feature++" method outperforms most manually designed features and is comparable with state-of-the-art methods based on deep learning architectures.
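The cross-dimension fusion step amounts to concatenating, per superpixel, a 2D appearance descriptor with 3D cues derived from stereo before classification. The feature contents and the linear stand-in classifier below are illustrative assumptions; the paper trains a neural network and refines with a CRF.

```python
# Sketch of "Feature++"-style fusion: one fused vector per superpixel,
# built from a 2D appearance feature and a 3D (stereo-derived) feature,
# then scored by a classifier. A linear scorer stands in for the paper's
# neural network.

def fuse_features(appearance, stereo_3d):
    """Concatenate the 2D and 3D feature vectors of one superpixel."""
    return list(appearance) + list(stereo_3d)

def toy_road_classifier(fused, weights, bias=0.0):
    """Linear stand-in for the trained network: score > 0 -> road."""
    score = sum(w * x for w, x in zip(weights, fused)) + bias
    return score > 0.0
```

In the full pipeline, per-superpixel decisions like this are then smoothed by the conditional random field so neighboring superpixels agree.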
Title: Implementation of efficient, low power deep neural networks on next-generation intel client platforms
Authors: M. Deisher, A. Polonski
Venue: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pub Date: 2017-04-12
DOI: 10.1109/ICASSP.2017.8005304
Abstract: In recent years many signal processing applications involving classification, detection, and inference have enjoyed substantial accuracy improvements due to advances in deep learning. At the same time, the "Internet of Things" has become an important class of devices. Although the paradigm of local sensing and remote inference has been very successful (e.g., Apple Siri, Google Now, Microsoft Cortana, Amazon Alexa, and others), there exist many valuable applications where sensing duration is very long, the cost of communication is high, and scaling to millions or billions of devices is not practical. In such cases, local inference "at the edge" is attractive provided it can be done without compromising accuracy and within the thermal envelope and expected battery life of the edge device.
Title: Time of arrival disambiguation using the linear Radon transform
Authors: Youssef El Baba, A. Walther, Emanuël Habets
Venue: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pub Date: 2017-04-05
DOI: 10.1109/ICASSP.2017.7952127
Abstract: Echo labeling, the challenging task of assigning acoustic reflections to image sources, is equivalent to the highly important disambiguation task in room geometry inference. A method using the Radon transform, an image processing tool, is proposed to address this challenge. The method relies on acoustic wavefront detection in room impulse response stacks, obtained with a uniform linear array of loudspeakers and one microphone. Our experiments show that the proposed method can both detect and label echoes.
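The core operation can be sketched as a discrete Radon-style transform: treat the stack of impulse responses as an image, sum the energy along every candidate line (offset, slope), and take the peak as a detected wavefront. The integer slope grid and synthetic stack below are illustrative simplifications of the paper's setup.

```python
# Sketch of wavefront detection in a room-impulse-response stack via a
# discrete Radon-style transform. stack[m][t] is the response of array
# element m at time sample t; a candidate wavefront is the line
# t = offset + slope * m, and the strongest line wins.

def radon_peak(stack, slopes):
    """Return the (offset, slope) of the strongest line in the stack."""
    rows, cols = len(stack), len(stack[0])
    best, best_line = float("-inf"), None
    for slope in slopes:
        for offset in range(cols):
            total = 0.0
            for m in range(rows):
                t = offset + slope * m
                if 0 <= t < cols:
                    total += stack[m][t]
            if total > best:
                best, best_line = total, (offset, slope)
    return best_line
```

The recovered offset and slope correspond to a reflection's time of arrival at the reference element and its progression across the array, which is what feeds the echo-labeling step.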
Title: Key frames extraction using graph modularity clustering for efficient video summarization
Authors: Hana Gharbi, S. Bahroun, M. Massaoudi, E. Zagrouba
Venue: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pub Date: 2017-04-04
DOI: 10.1109/ICASSP.2017.7952407
Abstract: Keyframe extraction is one of the basic procedures in video retrieval and summarization. It consists of presenting an abstract of the video using its most representative frames. This paper presents an efficient keyframe extraction approach based on local description and graph modularity clustering. The first step generates a set of candidate keyframes using a windowing rule, in order to reduce the data to be examined. Next, interest points are detected in this set of images. Then, the repeatability between each pair of images in the candidate set is computed, and these values are stored in a matrix that we call the repeatability matrix. Finally, the repeatability matrix is modelled as an oriented graph, and keyframes are selected using the graph modularity clustering principle. Experiments showed that this method succeeds in extracting keyframes while preserving the salient content of the video. Furthermore, we obtained good values in terms of precision, PSNR and compression rate.
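The back half of the pipeline can be sketched concretely: threshold the repeatability matrix into a graph over candidate frames, cluster the graph, and keep one representative frame per cluster. Connected components are used below as a simple stand-in for the paper's graph modularity clustering, and the threshold and representative rule are illustrative choices.

```python
# Sketch of keyframe selection from a repeatability matrix: frames whose
# interest points repeat strongly are linked, linked frames are grouped,
# and the most "central" frame of each group becomes the keyframe.

def build_graph(repeatability, threshold):
    """Undirected graph over frames; edge when repeatability >= threshold."""
    n = len(repeatability)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if repeatability[i][j] >= threshold:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def connected_components(adj):
    """Group frames (stand-in for modularity clustering)."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps

def keyframes(repeatability, threshold):
    """One keyframe per cluster: the frame most repeatable within it."""
    comps = connected_components(build_graph(repeatability, threshold))
    return [max(c, key=lambda i: sum(repeatability[i][j] for j in c))
            for c in comps]
```

Modularity clustering would instead maximize within-cluster edge density against a random-graph baseline, which avoids having to pick the edge threshold by hand.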