Salient object segmentation using a switch scheme
Ran Shi, K. Ngan, Songnan Li
2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820712
In this paper, we propose a novel switch scheme and a saliency map binarization method for salient object segmentation. With the proposed switch scheme, the saliency map is routed to different segmentation methods according to its quality, which is evaluated by a measure also proposed in this paper. We further develop a binarization method that integrates three properties of the salient object and derives its information exclusively from the saliency map (i.e., without referring to the original image). Experimental results demonstrate that the proposed binarization method generates better segmentation results, and that the switch scheme improves them further by exploiting the merits of both segmentation methods.
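The abstract does not spell out the three properties used for binarization, but the baseline operation it improves on, thresholding a saliency map into a foreground mask, can be sketched with a standard Otsu threshold. This is purely illustrative; `otsu_threshold` and `binarize` are hypothetical names, not the paper's method.

```python
import numpy as np

def otsu_threshold(saliency: np.ndarray) -> int:
    """Otsu threshold for a saliency map with values in [0, 255]."""
    hist, _ = np.histogram(saliency.ravel(), bins=256, range=(0, 256))
    total = saliency.size
    sum_total = np.dot(np.arange(256), hist)
    sum_bg, weight_bg, best_t, best_var = 0.0, 0.0, 0, -1.0
    for t in range(256):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_total - sum_bg) / weight_fg
        # Maximize between-class variance over candidate thresholds.
        var_between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(saliency: np.ndarray) -> np.ndarray:
    """Global-threshold segmentation of a saliency map into a 0/1 mask."""
    return (saliency > otsu_threshold(saliency)).astype(np.uint8)
```

A quality-aware switch, as in the paper, would pick between such a global rule and a more elaborate segmenter per map.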
Style-oriented landmark retrieval and summarization
Wei-Yi Chang, Yi-Ren Yeh, Y. Wang
2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820857
While visual summarization aims to select representative images from an image collection, we address the distinct problem of style-oriented landmark retrieval and summarization from photographs of a city. Instead of simply summarizing or clustering a city's landmark images, we allow the user to provide a query image that need not come from the city of interest; the goal is to retrieve landmark images whose style matches the query, followed by a style-consistent image summarization across landmark categories. As a result, the summarized outputs for the various landmarks exhibit an image style similar to that of the query. Our experiments confirm that the proposed method performs favorably against existing and baseline approaches, with improved query-dependent style consistency.
Dynamic convolutional neural network for activity recognition
Chih-Hsiang You, Chen-Kuo Chiang
2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820749
In this paper, a novel Dynamic Convolutional Neural Network (D-CNN) using sensor data is proposed for activity recognition. Sensor data collected for activity recognition are usually not well aligned, and may contain noise and variations across different persons. To overcome these challenges, Gaussian Mixture Models (GMMs) are exploited to capture the distribution of each activity. The sensor data and the GMMs are then partitioned into segments, which form multiple paths in the convolutional neural network. During testing, Gaussian Mixture Regression (GMR) dynamically fits segments of the test signal to the corresponding paths in the CNN. Experimental results demonstrate the superior performance of the D-CNN over other learning methods.
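The role Gaussian Mixture Regression plays here, mapping an input coordinate to the expected signal value under a fitted joint mixture, can be sketched for the two-dimensional (t, x) case. This is the generic GMR conditional-mean formula under assumed, pre-fitted mixture parameters, not the D-CNN's actual alignment code.

```python
import numpy as np

def gmr(t: float, means, covs, weights) -> float:
    """E[x | t] under a 2-D (t, x) Gaussian mixture.

    means: (K, 2) component means; covs: (K, 2, 2) covariances;
    weights: (K,) mixture weights.
    """
    resp, cond = [], []
    for m, S, w in zip(means, covs, weights):
        var_t = S[0, 0]
        # Responsibility of component k for the input t (the Gaussian
        # normalization constant common to all components cancels).
        resp.append(w * np.exp(-0.5 * (t - m[0]) ** 2 / var_t) / np.sqrt(var_t))
        # Conditional mean of x given t for component k.
        cond.append(m[1] + S[1, 0] / var_t * (t - m[0]))
    resp = np.array(resp)
    resp /= resp.sum()
    return float(np.dot(resp, cond))
```

In the paper's setting, regression of this kind lets a test segment be assigned to the network path whose GMM explains it best.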
Image copy-move forgery detection using hierarchical feature point matching
Yuanman Li, Jiantao Zhou
2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820758
Copy-move forgery is one of the most common manipulations for tampering with digital images. Keypoint-based detection methods have been reported to be very effective in revealing copy-move evidence, owing to their robustness against geometric transforms. However, these methods fail when the forgery involves only small or smooth regions, where the number of keypoints is very limited. To tackle this challenge, we propose a simple yet effective copy-move forgery detection approach. By lowering the contrast threshold and rescaling the input image, we first generate a sufficient number of keypoints even in small or smooth regions. A novel hierarchical matching strategy is then developed to solve the keypoint matching problem. Finally, a novel iterative homography estimation technique is proposed that exploits the dominant orientation of each keypoint. Extensive experimental results demonstrate the superior performance of the proposed scheme.
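The keypoint pipeline above is designed to improve on the classic block-matching baseline, which only catches exact duplicates and breaks under rotation or scaling. That baseline is easy to sketch; the code below is a toy exact-match detector, not the paper's hierarchical keypoint matcher.

```python
from collections import defaultdict

import numpy as np

def detect_copy_move(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Flag pixels belonging to exactly duplicated square blocks.

    Hashes every overlapping block and marks blocks whose content
    appears more than once. Unlike keypoint methods, this cannot
    detect rotated, scaled, or recompressed copies.
    """
    h, w = img.shape
    seen = defaultdict(list)
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = img[y:y + block, x:x + block].tobytes()
            seen[key].append((y, x))
    mask = np.zeros_like(img, dtype=bool)
    for positions in seen.values():
        if len(positions) > 1:
            # In real use, flat regions would need filtering; this
            # sketch simply marks every duplicated block.
            for y, x in positions:
                mask[y:y + block, x:x + block] = True
    return mask
```

On a textured image with one pasted patch, only the source and destination blocks are flagged.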
Computer-assisted pronunciation training: From pronunciation scoring towards spoken language learning
Nancy F. Chen, Haizhou Li
2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820782
This paper reviews the research approaches used in computer-assisted pronunciation training (CAPT), addresses the existing challenges, and discusses emerging trends and opportunities. To complement existing work, our analysis places more emphasis on pronunciation teaching and learning (as opposed to pronunciation assessment), prosodic error detection (as opposed to phonetic error detection), and research work from the past five years given the recent rapid development in spoken language technology.
A novel paragraph embedding method for spoken document summarization
Kuan-Yu Chen, Shih-Hung Liu, Berlin Chen, H. Wang
2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820882
Representation learning has emerged as an active research subject in many machine learning applications because of its excellent performance. In natural language processing, paragraph (or sentence and document) embeddings are well suited to tasks such as information retrieval and document summarization; however, as far as we are aware, there is a dearth of research on paragraph embedding methods. Extractive spoken document summarization, which can help us browse and digest multimedia data efficiently, aims to select a set of indicative sentences from a source document that express its most important themes. There is general consensus that relevance and redundancy are both critical in a realistic summarization scenario, yet most existing methods determine only the degree of relevance between a sentence and its document. Motivated by these observations, this paper makes three major contributions. First, we propose a novel unsupervised paragraph embedding method, named the essence vector model, which aims not only to distill the most representative information from a paragraph but also to discard the general background information, producing a more informative low-dimensional vector representation. Second, we incorporate the deduced essence vectors into a density peaks clustering summarization method, which takes both relevance and redundancy into account simultaneously, to enhance spoken document summarization performance. Third, extensive spoken document summarization experiments confirm the effectiveness of the proposed methods over several well-practiced and state-of-the-art methods.
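The density peaks clustering step follows the Rodriguez-Laio formulation: each sentence embedding gets a local density rho and a distance delta to the nearest denser point, and points scoring high on both are taken as cluster centers. A minimal sketch of that (rho, delta) computation on generic vectors, not the paper's exact summarization variant:

```python
import numpy as np

def density_peaks(points: np.ndarray, d_c: float):
    """Return (rho, delta) for density peaks clustering.

    rho[i]  = number of other points within cutoff distance d_c.
    delta[i] = distance to the nearest point with higher density
               (for the globally densest point, its max distance).
    """
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    rho = (dist < d_c).sum(axis=1) - 1  # exclude the point itself
    delta = np.empty(len(points))
    for i in range(len(points)):
        denser = np.where(rho > rho[i])[0]
        if denser.size:
            delta[i] = dist[i, denser].min()
        else:  # no denser point exists
            delta[i] = dist[i].max()
    return rho, delta
```

Sentences selected as centers are both representative (high rho) and non-redundant (high delta), which is why the method handles relevance and redundancy jointly.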
Travel photo album summarization based on aesthetic quality, interestingness, and memorableness
Jun-Hyuk Kim, Jong-Seok Lee
2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820889
Photo album summarization refers to the process of choosing a representative subset of photos in a photo album. In this paper, we propose a novel system capable of automatic photo album summarization based on three fundamental criteria, namely, aesthetic quality, interestingness, and memorableness. Based on these criteria, steps for filtering and scoring photos are designed. Through an experiment with photo albums of different sizes, it is demonstrated that the proposed system works well consistently.
A discriminative training method incorporating pronunciation variations for dysarthric automatic speech recognition
Woo Kyeong Seong, Nam Kyun Kim, H. Ha, H. Kim
2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820840
While dysarthric speech recognition can provide a convenient interface for dysarthric speakers, it is hard to collect enough speech data to estimate the acoustic models reliably. In addition, the collected database contains many pronunciation variations due to the paralysis of the speakers' articulators. A discriminative training method is therefore proposed to improve the performance of such resource-limited dysarthric speech recognition. The proposed method is applied to subspace Gaussian mixture modeling by incorporating pronunciation variations into a conventional minimum phone error discriminative training criterion.
Sparse spatial filtering in frequency domain of multi-channel EEG for frequency and phase detection
Naoki Morikawa, Toshihisa Tanaka
2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820779
A brain-computer interface (BCI) based on steady-state visual evoked potentials (SSVEPs) is among the most practical BCIs because of its high recognition accuracy and short training time. To increase the number of commands in SSVEP-based BCIs, a frequency- and phase-mixed-coded SSVEP BCI has recently been proposed. However, detecting the frequency and phase of SSVEPs accurately requires handling multi-channel phase information so that the channels useful for command detection can be selected. In this paper, we propose a novel method that estimates both the frequency and the phase of SSVEPs with sparse complex spatial filters. We conducted experiments evaluating the performance of the proposed method in a mixed-coded SSVEP-based BCI. The proposed method achieved higher recognition accuracy and a lower computational cost for command detection than conventional methods. Moreover, the proposed method performs channel selection automatically.
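For context, the simplest SSVEP decoder scores candidate stimulus frequencies by spectral magnitude in a single channel; the paper's contribution is to replace this with learned sparse complex spatial filters over all channels, recovering phase as well. A hedged single-channel baseline sketch, for illustration only:

```python
import numpy as np

def detect_ssvep_frequency(x: np.ndarray, fs: float, candidates) -> float:
    """Pick the candidate stimulus frequency with the largest
    spectral magnitude in a single EEG channel.

    x: one channel of samples; fs: sampling rate in Hz;
    candidates: list of possible stimulus frequencies in Hz.
    """
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # Score each candidate by the magnitude at its nearest FFT bin.
    scores = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidates]
    return candidates[int(np.argmax(scores))]
```

This baseline ignores phase coding entirely, which is exactly why a mixed-coded BCI needs the richer complex-filter approach.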
A fast multi-focus image fusion algorithm by DWT and focused region decision map
Shumin Liu, Jiajia Chen
2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820864
To combine the advantages of spatial-domain and transform-domain methods, this paper presents a novel hybrid algorithm for multi-focus image fusion, which reduces both the error rate of sub-band coefficient selection in the transform domain and the artificial discontinuities created by spatial-domain algorithms. In this method, a wavelet transform is first applied to each input image, and a focused-region decision map is established from the extracted high-frequency sub-bands. The fusion rules are then guided by this map, and the fused coefficients are transformed back to form the fused image. Experimental results demonstrate that the proposed method outperforms various existing methods in terms of fusion quality benchmarks. In addition, the proposed algorithm has a complexity proportional to the total number of pixels in the image, which is lower than that of other algorithms producing similar fusion quality. Furthermore, the proposed algorithm requires only a one-level wavelet decomposition, further reducing the processing time. With the proposed method, fast, high-quality multi-focus image fusion becomes possible.
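The one-level pipeline in the abstract (transform, decide focused regions from the high-frequency sub-bands, fuse, invert) can be sketched with a Haar wavelet. This is a simplified illustration with hypothetical helper names, not the paper's decision-map construction or fusion rules.

```python
import numpy as np

def haar2d(img: np.ndarray):
    """One-level 2-D Haar transform: (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2  # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2  # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh) -> np.ndarray:
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    """Per coarse-grid location, keep the image with larger
    high-frequency Haar energy (i.e. the better-focused one)."""
    b1, b2 = haar2d(img1), haar2d(img2)
    e1 = sum(np.abs(c) for c in b1[1:])
    e2 = sum(np.abs(c) for c in b2[1:])
    pick1 = e1 >= e2  # focused-region decision map
    fused = [np.where(pick1, c1, c2) for c1, c2 in zip(b1, b2)]
    return ihaar2d(*fused)
```

Because all decisions happen on the half-resolution grid of one decomposition level, the cost stays proportional to the pixel count, matching the complexity claim above.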