Mask design for pinhole-array-based hand-held light field cameras with applications in depth estimation
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820688
Chen-Wei Chang, Min-Hung Chen, Kuan-Chang Chen, Chi-Ming Yeh, Yi-Chang Lu
Pinhole-array-based hand-held light field cameras can capture 4-dimensional light field data for applications such as digital refocusing and depth estimation. Our previous experience suggests that the design of the pinhole array mask is critical to the performance of the camera, and that the optimal mask parameters can differ considerably between applications. In this paper, we derive equations for determining the parameters of pinhole masks. The proposed physically-based model can be applied to cameras with different pixel sizes. Experimental results that match the proposed model are provided at the end of the paper.
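The abstract does not reproduce the derivation; as a rough illustration of the geometry involved, the Python sketch below applies the common F-number-matching rule for mask-based light field cameras, under which adjacent pinhole sub-images just tile the sensor. The function name, parameter values, and the rule itself are assumptions, not the authors' equations.

```python
# Illustrative sketch only: a common F-number-matching rule for mask-based
# light field cameras, not the paper's derivation. All names and values here
# are assumptions.

def pinhole_mask_parameters(f_number, mask_sensor_gap_mm, pixel_pitch_um):
    """Pick the pinhole pitch so adjacent sub-images just tile the sensor."""
    # Each pinhole projects an image of the main-lens aperture onto the
    # sensor; its diameter is approximately gap / F-number.
    subimage_mm = mask_sensor_gap_mm / f_number
    pitch_mm = subimage_mm                       # sub-images touch, no overlap
    angular_samples = subimage_mm * 1000.0 / pixel_pitch_um
    return pitch_mm, angular_samples

pitch, n_angular = pinhole_mask_parameters(f_number=4.0,
                                           mask_sensor_gap_mm=1.2,
                                           pixel_pitch_um=4.8)
print(f"pinhole pitch ~ {pitch:.3f} mm, ~{n_angular:.0f} angular samples/axis")
```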
{"title":"Mask design for pinhole-array-based hand-held light field cameras with applications in depth estimation","authors":"Chen-Wei Chang, Min-Hung Chen, Kuan-Chang Chen, Chi-Ming Yeh, Yi-Chang Lu","doi":"10.1109/APSIPA.2016.7820688","DOIUrl":"https://doi.org/10.1109/APSIPA.2016.7820688","url":null,"abstract":"Pinhole-array-based hand-held light field cameras can be used to capture 4-dimensional light field data for different applications such as digital refocusing and depth estimation. Our previous experiences suggest the design of the pinhole array mask is very critical to the performance of the camera, and the selection of mask parameters could be very different between applications. In this paper, we derive equations for determining the parameters of pinhole masks. The proposed physically-based model can be applied to cameras of different pixel sizes. The experimental results which match the proposed model are also provided at the end of this paper.","PeriodicalId":409448,"journal":{"name":"2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121208538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stereo matching using census transform of adaptive window sizes with gradient images
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820827
Jaeryun Ko, Yo-Sung Ho
The census transform is a simple matching cost for stereo matching that is robust to luminance variations between stereo image pairs; however, the resulting disparity maps vary with the shape and size of the census transform window. In this paper, we propose a stereo matching method that varies the size of the census transform window based on the gradients of the stereo images. Our experiments show higher accuracy of disparity values in areas of depth discontinuity.
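As a sketch of the idea, the snippet below computes a census-based matching cost whose window radius shrinks where the image gradient is strong; the specific selection rule, threshold, and window sizes are assumptions, not the paper's.

```python
import numpy as np

# Sketch of a gradient-adaptive census cost (assumed rule: strong local
# gradient -> small window near depth edges, large window in flat regions;
# the paper's actual selection criterion and window sizes may differ).

def census(img, y, x, r):
    """Census bit vector of the (2r+1) x (2r+1) window centered at (y, x)."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1]
    return (patch > img[y, x]).ravel()

def adaptive_census_cost(left, right, y, xl, xr, grad_mag, thresh=20.0):
    r = 2 if grad_mag[y, xl] > thresh else 4   # 5x5 near edges, 9x9 elsewhere
    return np.count_nonzero(census(left, y, xl, r) != census(right, y, xr, r))

left = np.random.rand(64, 64) * 255
right = np.roll(left, 3, axis=1)                 # toy stereo pair
grad_mag = np.hypot(*np.gradient(left))          # gradient magnitude
cost = adaptive_census_cost(left, right, 32, 40, 37, grad_mag)
```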
{"title":"Stereo matching using census transform of adaptive window sizes with gradient images","authors":"Jaeryun Ko, Yo-Sung Ho","doi":"10.1109/APSIPA.2016.7820827","DOIUrl":"https://doi.org/10.1109/APSIPA.2016.7820827","url":null,"abstract":"The census transform in computing the matching cost of stereo matching is simple and robust under luminance variations in stereo image pairs; however, different disparity maps are generated depending on the shape and size of the census transform window. In this paper, we propose a stereo matching method with variable sizes of census transform windows based on the gradients of stereo images. Our experiment shows higher accuracy of disparity values in the area of depth discontinuities.","PeriodicalId":409448,"journal":{"name":"2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126815594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frame rate up conversion for multiview video
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820714
Yoonmo Yang, Dohoon Lee, Byung Tae Oh
In this paper, we propose a new frame rate up-conversion method for multiview video. The proposed method uses the depth map and neighboring-view information to improve the accuracy of motion estimation and compensation. In detail, it decomposes a block into multiple layers using the depth map, then estimates the occluded regions in the lower layer from neighboring views, which leads to more accurate motion estimation and compensation. Experimental results show that the proposed method substantially improves the quality of the interpolated frames compared to conventional methods.
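A minimal sketch of the layer-decomposition step follows, using a simple mean-depth split; the paper's actual decomposition, occlusion estimation from neighboring views, and motion search are more elaborate and are only indicated in comments.

```python
import numpy as np

# Minimal sketch of the layer-decomposition step: split a block into near/far
# layers by a simple mean-depth threshold (an assumed simplification of the
# paper's method).

def decompose_block(depth_block):
    t = depth_block.mean()
    near = depth_block >= t      # upper (foreground) layer
    far = ~near                  # lower layer, where occlusions can appear
    return near, far

depth_block = np.random.rand(16, 16)
near, far = decompose_block(depth_block)
# Motion estimation/compensation would then run per layer, with pixels of the
# far layer that are occluded in one view filled in from a neighboring view
# before the intermediate frame is interpolated.
```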
{"title":"Frame rate up conversion for multiview video","authors":"Yoonmo Yang, Dohoon Lee, Byung Tae Oh","doi":"10.1109/APSIPA.2016.7820714","DOIUrl":"https://doi.org/10.1109/APSIPA.2016.7820714","url":null,"abstract":"In this paper, we propose a new frame rate up conversion method for multiview video. The proposed method uses the depth map and neighboring view information for the improvement of motion estimation and compensation accuracy. In details, it decomposes a block into multiple layers with depth map. Then it estimates the occluded regions in the lower layer using their neighboring view information, which consequently leads more accurate motion estimation and compensation. The experimental results show that the proposed method highly improves the quality of the interpolated frames compared to the conventional methods.","PeriodicalId":409448,"journal":{"name":"2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)","volume":"05 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127367435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discrete feature transform for low-complexity single-image super-resolution
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820852
Jonghee Kim, Changick Kim
Dictionary-based super-resolution has been actively studied with successful results. However, previous dictionary-based super-resolution methods rely on optimization or nearest-neighbor search, both of which have high complexity. In this paper, we propose a low-complexity super-resolution method called the discrete feature transform, which performs feature extraction and nearest-neighbor search at once. As a result, the proposed method achieves the lowest complexity among dictionary-based super-resolution methods with comparable performance.
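The abstract does not define the transform, so the snippet below is only one hypothetical reading: quantize a patch's gradient statistics into a short binary code and use that code directly as an index into a precomputed filter table, so that feature extraction and nearest-neighbor search collapse into a single lookup. All names and features here are invented for illustration.

```python
import numpy as np

# Hypothetical reading of "feature extraction and nearest-neighbor search at
# once": a binary code computed from gradient statistics indexes a learned
# table directly, replacing an explicit NN search with one lookup. This is an
# assumed interpretation, not the authors' published definition.

def discrete_code(patch):
    gy, gx = np.gradient(patch.astype(np.float64))
    feats = np.array([gx.mean(), gy.mean(),
                      gx.ravel() @ gy.ravel(),
                      np.abs(gx).mean() - np.abs(gy).mean()])
    bits = feats > 0.0
    return int(bits @ (1 << np.arange(bits.size)))   # code in [0, 16)

table = np.random.randn(16, 9)        # stand-in for learned per-code filters
patch = np.random.rand(5, 5)
filt = table[discrete_code(patch)]    # table lookup replaces NN search
```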
{"title":"Discrete feature transform for low-complexity single-image super-resolution","authors":"Jonghee Kim, Changick Kim","doi":"10.1109/APSIPA.2016.7820852","DOIUrl":"https://doi.org/10.1109/APSIPA.2016.7820852","url":null,"abstract":"Dictionary-based super-resolution is actively studied with successful achievements. However, previous dictionary-based super-resolution methods exploit optimization or nearest neighbor search which has high complexity. In this paper, we propose a low-complexity super-resolution method called the discrete feature transform which performs feature extraction and nearest neighbor search at once. As a result, the proposed method achieves the lowest complexity among dictionary-based super-resolution methods with a comparable performance.","PeriodicalId":409448,"journal":{"name":"2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127469234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive subspace-constrained diagonal loading
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820704
Yueh-Ting Tsai, B. Su, Yu Tsao, Syu-Siang Wang
Recently, a subspace-constrained diagonal loading (SSC-DL) method was proposed for beamforming that is robust to a mismatched direction of arrival (DoA). Although SSC-DL offers outstanding output-SINR performance, it is unclear how to choose the DL factor and subspace dimension in practice. The present study investigates conditions on the optimal parameters for SSC-DL and algorithms to determine them under realistic test conditions. First, we propose using the Capon power spectral density to estimate the desired signal power, which is then used to compute the optimal DL factor for SSC-DL. Next, a novel adaptive SSC-DL approach (adaptive-SSC-DL) is proposed, which dynamically optimizes the subspace dimension based on the test conditions. Simulation results show that adaptive-SSC-DL provides higher output SINR than several existing methods and achieves performance comparable to SSC-DL with an ideal parameter setup.
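Two of the building blocks mentioned above can be sketched in a few lines of NumPy: the Capon power estimate at the look direction and a diagonally loaded MVDR beamformer. The subspace constraint and the paper's actual mapping from signal power to DL factor are not shown; the heuristic used for gamma below is an assumption.

```python
import numpy as np

# Sketch of the building blocks: Capon power at the look direction and a
# diagonally loaded MVDR beamformer. The subspace constraint and the paper's
# rule for choosing the DL factor are omitted/assumed.

def steering_vector(n_sensors, theta, d_over_lambda=0.5):
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * d_over_lambda * k * np.sin(theta))

def capon_power(R, a):
    """Capon PSD estimate: P = 1 / (a^H R^{-1} a)."""
    return 1.0 / np.real(a.conj() @ np.linalg.solve(R, a))

def dl_mvdr_weights(R, a, gamma):
    """MVDR weights with diagonal loading gamma."""
    Rl = R + gamma * np.eye(R.shape[0])
    w = np.linalg.solve(Rl, a)
    return w / (a.conj() @ w)

n = 8
a = steering_vector(n, np.deg2rad(10.0))
R = np.eye(n) + 4.0 * np.outer(a, a.conj())   # toy covariance matrix
gamma = capon_power(R, a) * n                 # assumed heuristic, not the paper's
w = dl_mvdr_weights(R, a, gamma)
```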
{"title":"Adaptive subspace-constrained diagonal loading","authors":"Yueh-Ting Tsai, B. Su, Yu Tsao, Syu-Siang Wang","doi":"10.1109/APSIPA.2016.7820704","DOIUrl":"https://doi.org/10.1109/APSIPA.2016.7820704","url":null,"abstract":"Recently, a subspace-constrained diagonal loading (SSC-DL) method has been proposed for robust beamforming against the mismatched direction of arrival (DoA) issue. Although SSC-DL has outstanding output SINR performance, it is not clear how to choose the DL factor and subspace dimension in practice. The aim of the present study is to further investigate conditions on optimal parameters for SSC-DL and algorithms to determine them in realistic test conditions. First, we proposed to use the Capon power spectrum density to determine the desired signal power, which is then used to compute the optimal DL factor for SSC-DL. Next, a novel adaptive SSC-DL approach (adaptive-SSC-DL) is proposed, which can dynamically optimize the sub-space dimension based on the test conditions. Simulation results show that adaptive-SSC-DL provides higher output SINR than several existing methods and achieves comparable performance comparing to SSC-DL with ideal parameter setup.","PeriodicalId":409448,"journal":{"name":"2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129176065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Size-Invariant Fully Convolutional Neural Network for vessel segmentation of digital retinal images
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820677
Yuan-sheng Luo, Hong Cheng, Lu Yang
Vessel segmentation of digital retinal images plays an important role in the diagnosis of diseases such as diabetes, hypertension, and retinopathy of prematurity, since these diseases affect the retina. In this paper, a novel Size-Invariant Fully Convolutional Neural Network (SIFCN) is proposed for automatic retinal vessel segmentation. The input to the network consists of image patches and the corresponding pixel-wise labels. Consecutive convolution and pooling layers follow the input, so that the network can learn abstract features for segmenting retinal vessels. The network is designed to preserve the height and width of the data at each layer by choosing the padding and pooling stride accordingly, so spatial information is maintained and no upsampling is required. Compared with pixel-wise retinal vessel segmentation approaches, our patch-wise segmentation is much more efficient, since each pass predicts all the pixels of a patch. Our overlapped SIFCN achieves an accuracy of 0.9471 with an AUC of 0.9682, and our non-overlapping SIFCN is the most efficient of the deep learning approaches, costing only 3.68 seconds per image; the overlapped SIFCN costs 31.17 seconds per image.
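The size-preserving trick described above ("same" padding plus unit-stride pooling, so no upsampling is needed) can be illustrated with a toy PyTorch network; layer counts and widths are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Toy illustration of the size-preserving design: every conv uses "same"
# padding and pooling uses stride 1, so the output stays H x W with no
# upsampling. Depths/widths are placeholders, not the paper's network.

class TinySIFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, stride=1, padding=1),   # pooling without shrinking
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(32, 2, 1),                    # per-pixel vessel/background scores
        )

    def forward(self, x):
        return self.features(x)

patch = torch.randn(1, 1, 48, 48)
print(TinySIFCN()(patch).shape)    # torch.Size([1, 2, 48, 48]): size preserved
```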
{"title":"Size-Invariant Fully Convolutional Neural Network for vessel segmentation of digital retinal images","authors":"Yuan-sheng Luo, Hong Cheng, Lu Yang","doi":"10.1109/APSIPA.2016.7820677","DOIUrl":"https://doi.org/10.1109/APSIPA.2016.7820677","url":null,"abstract":"Vessel segmentation of digital retinal images plays an important role in diagnosis of diseases such as diabetics, hypertension and retinopathy of prematurity due to these diseases impact the retina. In this paper, a novel Size-Invariant Fully Convolutional Neural Network (SIFCN) is proposed to address the automatic retinal vessel segmentation problems. The input data of the network is the patches of images and the corresponding pixel-wise labels. A consecutive convolution layers and pooling layers follow the input data, so that the network can learn the abstract features to segment retinal vessel. Our network is designed to hold the height and width of data of each layer with padding and assign pooling stride so that the spatial information maintain and up-sample is not required. Compared with the pixel-wise retinal vessel segmentation approaches, our patch-wise segmentation is much more efficient since in each cycle it can predict all the pixels of the patch. Our overlapped SIFCN approach achieves accuracy of 0.9471, with the AUC of 0.9682. And our non-overlap SIFCN is the most efficient approach among the deep learning approaches, costing only 3.68 seconds per image, and the overlapped SIFCN costs 31.17 seconds per image.","PeriodicalId":409448,"journal":{"name":"2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)","volume":"05 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130364872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Saliency detection using secondary quantization in DCT domain
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820877
Xinyu Shen, Chunyu Lin, Yao Zhao, Hongyun Lin, Meiqin Liu
Saliency detection is widely used as an image preprocessing step in many applications such as image segmentation. Since most images are stored in the DCT domain, we propose an effective saliency detection algorithm based mainly on the DCT and secondary quantization. First, the DC coefficient and the first five AC coefficients are used to obtain the color saliency map. Then, through secondary quantization of a JPEG image, we obtain the difference between the original image and the re-quantized image, from which we derive the texture saliency map. Next, following the center-bias theory, the central region is more likely to attract attention, and a band-pass filter is used to simulate how the human visual system detects salient regions. Finally, the final saliency map is generated from these two maps and the two priors. Experimental results on two datasets show that the proposed method accurately detects salient regions and outperforms existing methods.
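The secondary-quantization idea can be sketched per 8x8 block: re-quantize the block's DCT coefficients and measure the residual as a texture cue. The uniform quantization step, and the omission of the color map, center prior, and fusion stage, are simplifications of the paper's pipeline.

```python
import numpy as np
from scipy.fftpack import dctn, idctn

# Sketch of "secondary quantization": re-quantize a block's DCT coefficients
# and take the reconstruction residual as a texture cue. The uniform step Q
# is a toy stand-in for a JPEG quantization table.

Q = 24.0  # toy quantization step (assumption)

def texture_residual(block):
    coef = dctn(block, norm='ortho')
    requant = np.round(coef / Q) * Q                    # secondary quantization
    return np.abs(idctn(coef - requant, norm='ortho')).mean()

img = np.random.rand(64, 64) * 255.0
tex = np.array([[texture_residual(img[y:y + 8, x:x + 8])
                 for x in range(0, 64, 8)]
                for y in range(0, 64, 8)])              # per-block texture map
```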
{"title":"Saliency detection using secondary quantization in DCT domain","authors":"Xinyu Shen, Chunyu Lin, Yao Zhao, Hongyun Lin, Meiqin Liu","doi":"10.1109/APSIPA.2016.7820877","DOIUrl":"https://doi.org/10.1109/APSIPA.2016.7820877","url":null,"abstract":"Saliency detection as an image preprocessing has been widely used in many applications such as image segmentation. Since most images stored in DCT domain, we propose an effective saliency detection algorithm, which is mainly based on DCT and secondary quantization. Firstly, the DC coefficient and the first five AC coefficients are used to get the color saliency map. Then, through secondary quantization of a JPEG image, we can obtain the difference of the original image and the quantified image, from which we can get the texture saliency map. Next, considering the center bias theory, the center region is easier to catch people's attention. And then the band-pass filter is used to simulate the behavior that the human visual system detects the salient region. Finally, the final saliency map is generated based on these two maps and two priorities. Experimental results on two datasets show that the proposed method can accurately detect the saliency regions and outperformed existing methods.","PeriodicalId":409448,"journal":{"name":"2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130666618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognition of low-resolution face images using sparse coding of local features
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820829
M. S. Shakeel, K. Lam
In this paper, we propose a new approach to recognizing low-resolution face images using sparse coding of local features. The proposed algorithm extracts Gabor features from a low-resolution gallery image and a query image at different scales and orientations, then projects the features separately into a new low-dimensional feature space using sparse coding, which preserves the sparse structure of the local features. To measure the similarity between the projected features, a coefficient vector relating the projected gallery and query features is estimated by linear regression. On the basis of this coefficient vector, residual values are computed to classify the images. To validate the proposed method, experiments were performed on three databases (ORL, Extended-Yale B, and CAS-PEAL-R1), which contain images with varying facial expressions and lighting conditions. Experimental results show that our method outperforms various classical and state-of-the-art face recognition methods.
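The classification stage described above is essentially linear-regression classification over the projected features; a minimal sketch follows, assuming the Gabor extraction and sparse-coding projection have already produced the feature vectors.

```python
import numpy as np

# Minimal sketch of residual-based classification: regress the projected
# query feature onto each class's gallery features and pick the class with
# the smallest reconstruction residual. Feature extraction/projection are
# assumed done upstream.

def classify_by_residual(query, gallery):
    """gallery: dict class_id -> (dim, n_samples) matrix of projected features."""
    best, best_r = None, np.inf
    for cid, X in gallery.items():
        beta, *_ = np.linalg.lstsq(X, query, rcond=None)  # linear regression
        r = np.linalg.norm(query - X @ beta)              # residual value
        if r < best_r:
            best, best_r = cid, r
    return best

rng = np.random.default_rng(0)
gallery = {c: rng.standard_normal((40, 5)) for c in range(3)}  # 3 toy classes
query = gallery[1] @ rng.standard_normal(5)   # lies in class 1's span
print(classify_by_residual(query, gallery))   # -> 1
```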
Consideration on performance improvement of shadow and reflection removal based on GMM
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820902
K. Nishikawa, Yoshihiro Yamashita, Toru Yamaguchi, T. Nishitani
Wearable devices are expected to provide ubiquitous network connections in the near future. In this paper, we consider systems that use human finger gestures as an input device. To ensure accurate input, the shapes of the arm and fingers should be captured clearly, and for that purpose we use Gaussian mixture model (GMM) foreground segmentation. It is known that shadows and reflections in the frame image degrade the performance of GMM foreground segmentation. Low-complexity shadow and reflection removal methods suitable for implementation on wearable devices have been proposed [1]-[3]. Although these methods improve foreground segmentation performance, the results depend on the characteristics of the video. In this paper, we improve these methods by modifying the equation for deciding the shadow region. Through computer simulations, we show the effectiveness of the proposed method.
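For context, OpenCV's MOG2 background subtractor already implements GMM foreground segmentation with a built-in shadow test (shadow pixels are labeled 127 in the mask); the baseline sketch below uses that off-the-shelf API. The paper's contribution, modifying the shadow-decision equation itself, is not reproduced here, and the input clip name is hypothetical.

```python
import cv2

# Baseline sketch: GMM segmentation with OpenCV's built-in shadow detection.
# The paper modifies the shadow-decision equation, which this off-the-shelf
# call does not expose.

cap = cv2.VideoCapture('hand_input.mp4')      # hypothetical input clip
mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = mog2.apply(frame)                           # 255=fg, 127=shadow, 0=bg
    foreground = (mask == 255).astype('uint8') * 255   # drop shadow label 127
cap.release()
```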
{"title":"Consideration on performance improvement of shadow and reflection removal based on GMM","authors":"K. Nishikawa, Yoshihiro Yamashita, Toru Yamaguchi, T. Nishitani","doi":"10.1109/APSIPA.2016.7820902","DOIUrl":"https://doi.org/10.1109/APSIPA.2016.7820902","url":null,"abstract":"Wearable devices are expected to provide a ubiquitous network connection in the near future. In this paper, we consider systems which uses human finger gestures as an input device. To assure accurate input characteristics, the shape of arm and fingers should be captured clearly, and for that purpose we consider using the Gaussian mixture model (GMM) foreground segmentation. It is known that shadow or reflection in the frame image affects the performance of GMM foreground segmentation. A low computational shadow or reflection removal methods are proposed [1]-[3] which are suitable to be implemented in wearable devices. Although the methods improve the foreground segmentation performance, the results depend on the characteristics of the video. In this paper, we consider improving the performance of the methods by modifying the equation for deciding the shadow region. Through the computer simulations, we show the effectiveness of the proposed method.","PeriodicalId":409448,"journal":{"name":"2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116982543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A facial expression model with generative albedo texture
Pub Date: 2016-12-01 | DOI: 10.1109/APSIPA.2016.7820866
Songnan Li, Fanzi Wu, Tianhao Zhao, Ran Shi, K. Ngan
A facial expression model (FEM) is developed that can synthesize various face shapes and albedo textures. Face shape varies with both identity and expression; FEM synthesizes these shape variations using a bilinear face model built from the Face Warehouse Database. The generative albedo texture, on the other hand, is extracted directly from a neutral face model, the Basel Face Model. In this paper, we elaborate the model construction process and demonstrate its application in face reconstruction and expression tracking.
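A bilinear face model synthesizes a shape by contracting a core tensor with identity and expression weight vectors; the sketch below shows that contraction with toy dimensions, not those of the actual FaceWarehouse-built model.

```python
import numpy as np

# Sketch of bilinear shape synthesis: a core tensor (vertices x identity x
# expression) contracted with identity and expression weights. Dimensions and
# random values are toy stand-ins, not the paper's learned model.

n_verts, n_id, n_exp = 3 * 1000, 50, 25
core = np.random.randn(n_verts, n_id, n_exp)   # stand-in for the learned core
w_id = np.random.randn(n_id)                   # identity weights
w_exp = np.random.randn(n_exp)                 # expression weights

shape = np.einsum('vie,i,e->v', core, w_id, w_exp)  # synthesized face shape
```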
{"title":"A facial expression model with generative albedo texture","authors":"Songnan Li, Fanzi Wu, Tianhao Zhao, Ran Shi, K. Ngan","doi":"10.1109/APSIPA.2016.7820866","DOIUrl":"https://doi.org/10.1109/APSIPA.2016.7820866","url":null,"abstract":"A facial expression model (FEM) is developed which can synthesize various face shapes and albedo textures. The face shape varies with individuals and expressions. FEM synthesizes these shape variations by using a bilinear face model built from the Face Warehouse Database. On the other hand, the generative albedo texture is directly extracted from a neutral face model — the Basel Face Model. In this paper, we elaborate the model construction process and demonstrate its application in face reconstruction and expression tracking.","PeriodicalId":409448,"journal":{"name":"2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131958865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}