Pub Date : 2015-08-06 DOI: 10.1109/ICME.2015.7177457
Improved performance of inverse halftoning algorithms via coupled dictionaries
P. Freitas, Mylène C. Q. Farias, Aleteia P. F. Araujo
Inverse halftoning techniques are known to introduce visible distortions (typically blurring or noise) into the reconstructed image. To reduce the severity of these distortions, we propose a novel training approach for inverse halftoning algorithms. The proposed technique uses a coupled dictionary (CD) to match distorted and original images via a sparse representation, enforcing similarity between the sparse representations of distorted and non-distorted images. Results show that the proposed technique can improve the performance of different inverse halftoning approaches. Images reconstructed with the proposed approach have higher quality, with less blur, noise, and chromatic aberration.
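To make the coupled-dictionary idea concrete, here is a minimal, hedged sketch (not the authors' exact training procedure): stacking paired distorted/original patches forces a single sparse code to reconstruct both views, which is one standard way to enforce matching sparse representations. All shapes and data below are illustrative stand-ins.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(0)

# Hypothetical paired training patches (rows): X_d from distorted
# (inverse-halftoned) images, X_o from the pristine originals.
X_d = rng.standard_normal((500, 64))
X_o = X_d + 0.1 * rng.standard_normal((500, 64))

# Joint training: concatenating each pair forces one sparse code per patch
# to reconstruct both views, which couples the two dictionary halves.
X_joint = np.hstack([X_d, X_o])                       # (500, 128)
learner = DictionaryLearning(n_components=32, transform_algorithm="omp",
                             transform_n_nonzero_coefs=4, random_state=0)
learner.fit(X_joint)
D_d, D_o = learner.components_[:, :64], learner.components_[:, 64:]

# Test time: code a distorted patch with D_d, reconstruct with D_o.
coder = SparseCoder(dictionary=D_d, transform_algorithm="omp",
                    transform_n_nonzero_coefs=4)
alpha = coder.transform(X_d[:1])                      # sparse code, (1, 32)
restored_patch = alpha @ D_o                          # (1, 64)
```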
{"title":"Improved performance of inverse halftoning algorithms via coupled dictionaries","authors":"P. Freitas, Mylène C. Q. Farias, Aleteia P. F. Araujo","doi":"10.1109/ICME.2015.7177457","DOIUrl":"https://doi.org/10.1109/ICME.2015.7177457","url":null,"abstract":"Inverse halftoning techniques are known to introduce visible distortions (typically, blurring or noise) into the reconstructed image. To reduce the severity of these distortions, we propose a novel training approach for inverse halftoning algorithms. The proposed technique uses a coupled dictionary (CD) to match distorted and original images via a sparse representation. This technique enforces similarities of sparse representations between distorted and non-distorted images. Results show that the proposed technique can improve the performance of different inverse halftone approaches. Images reconstructed with the proposed approach have a higher quality, showing less blur, noise, and chromatic aberrations.","PeriodicalId":146271,"journal":{"name":"2015 IEEE International Conference on Multimedia and Expo (ICME)","volume":"191 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131741481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-08-06 DOI: 10.1109/ICME.2015.7177459
Detecting abnormal behaviors in surveillance videos based on fuzzy clustering and multiple Auto-Encoders
Zhengying Chen, Yonghong Tian, Wei Zeng, Tiejun Huang
In this paper, we present a novel framework to detect abnormal behaviors in surveillance videos using fuzzy clustering and multiple Auto-Encoders (FMAE). Because abnormal-behavior detection is usually treated as an unsupervised task, the key question is how to describe normal patterns. Since daily life contains many types of normal behavior, we use fuzzy clustering to roughly divide the training samples into several clusters, each standing for one normal pattern. We then deploy multiple Auto-Encoders to estimate these different types of normal behaviors from weighted samples. Given an unknown test video, the framework predicts whether it contains abnormal behaviors by summarizing the reconstruction costs across the Auto-Encoders. Because surveillance video is highly redundant, Auto-Encoders are well suited to automatically capturing the common structure of normal video sequences and estimating normal patterns. Experimental results show that our approach achieves good performance on three public video analysis datasets and statistically outperforms state-of-the-art approaches in some scenes.
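A toy sketch of the FMAE pipeline under simplifying assumptions: soft memberships derived from k-means distances stand in for fuzzy c-means, and a weighted linear auto-encoder (truncated, weighted PCA) stands in for each neural Auto-Encoder. The anomaly score is the smallest reconstruction cost over the ensemble, as the abstract describes.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 32))          # hypothetical clip features

# 1) Soft memberships: softmax over negative distances to cluster centers.
k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
d = km.transform(X)                          # (n, k) distances
U = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)

# 2) One weighted linear auto-encoder per normal pattern.
def fit_linear_ae(X, w, n_components=8):
    mu = np.average(X, axis=0, weights=w)
    Xc = (X - mu) * np.sqrt(w)[:, None]      # membership-weighted samples
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mu, Vt[:n_components]             # mean + encoder/decoder basis

aes = [fit_linear_ae(X, U[:, j]) for j in range(k)]

# 3) Anomaly score: smallest reconstruction cost over all auto-encoders.
def score(x):
    errs = []
    for mu, V in aes:
        xr = mu + (x - mu) @ V.T @ V         # encode, then decode
        errs.append(np.sum((x - xr) ** 2))
    return min(errs)                          # high value => abnormal

print(score(X[0]), score(10 + X[0]))          # in-pattern vs. shifted sample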
{"title":"Detecting abnormal behaviors in surveillance videos based on fuzzy clustering and multiple Auto-Encoders","authors":"Zhengying Chen, Yonghong Tian, Wei Zeng, Tiejun Huang","doi":"10.1109/ICME.2015.7177459","DOIUrl":"https://doi.org/10.1109/ICME.2015.7177459","url":null,"abstract":"In this paper, we present a novel framework to detect abnormal behaviors in surveillance videos by using fuzzy clustering and multiple Auto-Encoders (FMAE). As detecting abnormal behaviors is often treated as an unsupervised task, how to describe normal patterns becomes the key point. Considering there are many types of normal behaviors in the daily life, we use the fuzzy clustering technique to roughly divide the training samples into several clusters so that each cluster stands for a normal pattern. Then we deploy multiple Auto-Encoders to estimate these different types of normal behaviors from weighted samples. When testing on an unknown video, our framework can predict whether it contains abnormal behaviors or not by summarizing the reconstruction cost through each Auto-Encoder. Since there are always lots of redundancies in the surveillance video, Auto-Encoder is a pretty good tool to capture common structures of normal video sequences automatically as well as estimate normal patterns. The experimental results show that our approach achieves good performance on three public video analysis datasets and statistically outperforms the state-of-the-art approaches under some scenes.","PeriodicalId":146271,"journal":{"name":"2015 IEEE International Conference on Multimedia and Expo (ICME)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126640475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-08-06 DOI: 10.1109/ICME.2015.7177394
Facial expression preserving privacy protection using image melding
Yuta Nakashima, Tatsuya Koyama, N. Yokoya, N. Babaguchi
An enormous number of images are shared through social networking services such as Facebook. These images usually show people and may violate their privacy if published without each person's permission. To address this concern, visual privacy protection, such as blurring, is applied to the facial regions of people who have not given permission. However, besides degrading image quality, this may spoil the context of the image: if some people are filtered while others are not, the missing facial expressions make the image difficult to comprehend. This paper proposes an image melding-based method that modifies facial regions in a visually unintrusive way while preserving facial expression. Our experimental results demonstrate that the proposed method can retain facial expression while protecting privacy.
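For context, the blurring baseline that the paper improves on (not the image-melding method itself) can be sketched in a few lines of OpenCV; the file path and filter size are illustrative. The hard blur this produces is exactly what destroys facial expression.

```python
import cv2

img = cv2.imread("photo.jpg")                 # illustrative input path
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5):
    roi = img[y:y + h, x:x + w]
    # Gaussian blur hides identity but also erases the expression.
    img[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
cv2.imwrite("photo_blurred.jpg", img)
```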
{"title":"Facial expression preserving privacy protection using image melding","authors":"Yuta Nakashima, Tatsuya Koyama, N. Yokoya, N. Babaguchi","doi":"10.1109/ICME.2015.7177394","DOIUrl":"https://doi.org/10.1109/ICME.2015.7177394","url":null,"abstract":"An enormous number of images are currently shared through social networking services such as Facebook. These images usually contain appearance of people and may violate the people's privacy if they are published without permission from each person. To remedy this privacy concern, visual privacy protection, such as blurring, is applied to facial regions of people without permission. However, in addition to image quality degradation, this may spoil the context of the image: If some people are filtered while the others are not, missing facial expression makes comprehension of the image difficult. This paper proposes an image melding-based method that modifies facial regions in a visually unintrusive way with preserving facial expression. Our experimental results demonstrated that the proposed method can retain facial expression while protecting privacy.","PeriodicalId":146271,"journal":{"name":"2015 IEEE International Conference on Multimedia and Expo (ICME)","volume":"190 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117346535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-08-06 DOI: 10.1109/ICME.2015.7177512
An adaptive PEE-based reversible data hiding scheme exploiting referential prediction-errors
Fei Peng, Xiaolong Li, Bin Yang
Prediction-error expansion (PEE) is an efficient technique for reversible data hiding (RDH). Instead of expanding the highest histogram bins as in conventional PEE, in this paper we propose a new PEE-based RDH scheme with an adaptive expansion strategy based on referential prediction-errors, so as to better exploit image redundancy. For each pixel, we first compute its prediction-error and use a neighboring prediction-error as a reference. The correlation between the prediction-error and its reference is exploited to adaptively select bins for expansion embedding. In addition, to further improve reversible embedding performance, we apply a pixel selection technique so that pixels located in smooth image areas are embedded first. Experimental results show that the proposed scheme outperforms conventional PEE as well as several state-of-the-art RDH schemes.
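For readers unfamiliar with PEE, here is a minimal sketch of classical prediction-error expansion with a fixed threshold; the paper's adaptive bin selection via referential prediction-errors and its pixel-selection step are omitted, as is overflow/underflow handling (location map).

```python
import numpy as np

def pee_embed(img, bits, T=1):
    """Embed bits by expanding prediction-errors e with -T <= e <= T-1;
    larger errors are shifted outward so the mapping stays invertible."""
    out = img.astype(np.int64).copy()
    it = iter(bits)
    for i in range(out.shape[0]):
        for j in range(1, out.shape[1]):
            pred = int(out[i, j - 1])        # marked left neighbor as predictor
            e = int(out[i, j]) - pred
            if -T <= e <= T - 1:             # expandable bin: embed one bit
                out[i, j] = pred + 2 * e + next(it, 0)
            elif e >= T:                     # shift right tail outward
                out[i, j] = pred + e + T
            else:                            # shift left tail outward
                out[i, j] = pred + e - T
    return out

def pee_extract(marked, T=1):
    rec = marked.astype(np.int64).copy()
    bits = []
    for i in range(marked.shape[0]):
        for j in range(1, marked.shape[1]):
            pred = int(marked[i, j - 1])     # same predictor as the embedder
            e = int(marked[i, j]) - pred
            if -2 * T <= e <= 2 * T - 1:     # expanded: recover bit and error
                bits.append(e & 1)
                rec[i, j] = pred + (e >> 1)
            elif e >= 2 * T:
                rec[i, j] = pred + e - T
            else:
                rec[i, j] = pred + e + T
    return rec, bits

# Nearly flat test image keeps prediction-errors small (more capacity).
img = 128 + np.random.default_rng(0).integers(-1, 2, size=(8, 8))
marked = pee_embed(img, [1, 0, 1, 1, 0, 1])
rec, bits = pee_extract(marked)
assert np.array_equal(rec, img)              # perfect reversibility
print("first extracted bits:", bits[:6])
```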
{"title":"An adaptive PEE-based reversible data hiding scheme exploiting referential prediction-errors","authors":"Fei Peng, Xiaolong Li, Bin Yang","doi":"10.1109/ICME.2015.7177512","DOIUrl":"https://doi.org/10.1109/ICME.2015.7177512","url":null,"abstract":"Prediction-error expansion (PEE) is an efficient technique for reversible data hiding (RDH). Instead of expanding the highest histogram bins in conventional PEE, in this paper, to better utilize the image redundancy, we propose a new PEE-based RDH scheme with an advisable expansion strategy utilizing referential prediction-errors. For each pixel, we first calculate its prediction-error and use its neighbor prediction-error as a reference. The correlation of the prediction-error and its reference is exploited to adaptively select bins for expansion embedding. In addition, to further enhance the reversible embedding performance, we apply the pixel selection technique in our scheme such that the pixels located in smooth image areas are priorly embedded. Experimental results show that the proposed scheme outperforms conventional PEE and it is better than some state-of-the-art RDH works as well.","PeriodicalId":146271,"journal":{"name":"2015 IEEE International Conference on Multimedia and Expo (ICME)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128452532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-08-06 DOI: 10.1109/ICME.2015.7177522
Real-time face detection in Full HD images exploiting both embedded CPU and GPU
Chanyoung Oh, Saehanseul Yi, Youngmin Yi
CPU-GPU heterogeneous systems have become a mainstream platform in both server and embedded domains, with ever-increasing demand for powerful accelerators. In this paper, we present parallelization techniques that exploit both the data and task parallelism of an LBP-based face detection algorithm on an embedded heterogeneous platform. By running tasks in a pipelined-parallel fashion on the multicore CPU and offloading a data-parallel task to the GPU, we achieve 29 fps on Full HD input on the Tegra K1 platform, which integrates a quad-core Cortex-A15 CPU and a CUDA-capable 192-core GPU. This corresponds to a 5.54x speedup over a sequential version and a 1.69x speedup over a GPU-only implementation.
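The pipelined-parallel structure can be illustrated schematically (this is not the authors' CUDA code): stages run concurrently and communicate through bounded queues, so the latency of the GPU-offloaded detection stage overlaps with CPU pre- and post-processing. The stage bodies below are placeholders.

```python
import queue
import threading

q01, q12 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)

def stage_capture(frames):
    for f in frames:
        q01.put(f)                 # e.g., decode + downscale on one CPU core
    q01.put(None)                  # end-of-stream sentinel

def stage_detect():
    while (f := q01.get()) is not None:
        q12.put(("faces", f))      # stand-in for the GPU-offloaded LBP cascade
    q12.put(None)

def stage_draw(results):
    while (r := q12.get()) is not None:
        results.append(r)          # overlay boxes / encode on another core

results = []
threads = [threading.Thread(target=stage_capture, args=(range(8),)),
           threading.Thread(target=stage_detect),
           threading.Thread(target=stage_draw, args=(results,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results), "frames processed")
```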
{"title":"Real-time face detection in Full HD images exploiting both embedded CPU and GPU","authors":"Chanyoung Oh, Saehanseul Yi, Youngmin Yi","doi":"10.1109/ICME.2015.7177522","DOIUrl":"https://doi.org/10.1109/ICME.2015.7177522","url":null,"abstract":"CPU-GPU heterogeneous systems have become a mainstream platform in both server and embedded domains with ever increasing demand for powerful accelerator. In this paper, we present parallelization techniques that exploit both data and task parallelism of LBP based face detection algorithm on an embedded heterogeneous platform. By running tasks in a pipelined parallel way on multicore CPUs and by offloading a data-parallel task to a GPU, we could successfully achieve 29 fps for Full HD inputs on Tegra K1 platform where quad-core Cortex-A15 CPU and CUDA supported 192-core GPU are integrated. This corresponds to 5.54x speedup over a sequential version and 1.69x speedup compared to the GPU-only implementations.","PeriodicalId":146271,"journal":{"name":"2015 IEEE International Conference on Multimedia and Expo (ICME)","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127228583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-08-06 DOI: 10.1109/ICME.2015.7177418
Machine learning based rate adaptation with elastic feature selection for HTTP-based streaming
Yu-Lin Chien, K. Lin, Ming-Syan Chen
Dynamic Adaptive Streaming over HTTP (DASH) is now widely deployed. Video rate adaptation is key to determining the video quality of HTTP-based media streaming. Recent works have proposed several algorithms that allow a DASH client to adapt its video encoding rate to network dynamics. While network conditions are affected by many different factors, these algorithms usually consider only a few representative signals, e.g., the predicted available bandwidth or the fullness of the playback buffer. In addition, errors in bandwidth estimation can significantly degrade their performance. This paper therefore presents Machine Learning-based Adaptive Streaming over HTTP (MLASH), an elastic framework that exploits a wide range of useful network-related features to train a rate classification model. MLASH is distinctive in that its machine-learning framework can be combined with any existing adaptation algorithm and can exploit big-data characteristics to improve prediction accuracy. We show via trace-based simulations that machine learning-based adaptation achieves better performance than traditional adaptation algorithms in terms of target quality-of-experience (QoE) metrics.
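A hedged sketch of the rate-classification idea: train a classifier on network-related features to choose the next segment's bitrate class. The feature set, bitrate ladder, and oracle labeling below are illustrative stand-ins, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.gamma(2.0, 2.0, n),          # last-segment throughput (Mbps)
    rng.normal(0.5, 0.2, n),         # throughput variability
    rng.uniform(0, 30, n),           # playback buffer level (s)
    rng.uniform(0, 0.2, n),          # packet-loss / RTT proxy
])
BITRATES = [1.0, 2.5, 5.0, 8.0]      # illustrative bitrate ladder (Mbps)

# Oracle label for training: highest ladder rate not exceeding throughput.
y = np.clip(np.digitize(X[:, 0], BITRATES) - 1, 0, len(BITRATES) - 1)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
state = [[4.2, 0.4, 12.0, 0.01]]     # current network state (made up)
print("next segment:", BITRATES[int(clf.predict(state)[0])], "Mbps")
```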
{"title":"Machine learning based rate adaptation with elastic feature selection for HTTP-based streaming","authors":"Yu-Lin Chien, K. Lin, Ming-Syan Chen","doi":"10.1109/ICME.2015.7177418","DOIUrl":"https://doi.org/10.1109/ICME.2015.7177418","url":null,"abstract":"Dynamic Adaptive Streaming over HTTP (DASH) has become an emerging application nowadays. Video rate adaptation is a key to determine the video quality of HTTP-based media streaming. Recent works have proposed several algorithms that allow a DASH client to adapt its video encoding rate to network dynamics. While network conditions are typically affected by many different factors, these algorithms however usually consider only a few representative information, e.g., predicted available bandwidth or fullness of its playback buffer. In addition, the error in bandwidth estimation could significantly degrade their performance. Therefore, this paper presents Machine Learning-based Adaptive Streaming over HTTP (MLASH), an elastic framework that exploits a wide range of useful network-related features to train a rate classification model. The distinct properties of MLASH are that its machine learning-based framework can be incorporated with any existing adaptation algorithm and utilize big data characteristics to improve prediction accuracy. We show via trace-based simulations that machine learning-based adaptation can achieve a better performance than traditional adaptation algorithms in terms of their target quality of experience (QoE) metrics.","PeriodicalId":146271,"journal":{"name":"2015 IEEE International Conference on Multimedia and Expo (ICME)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131693217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-08-06 DOI: 10.1109/ICME.2015.7177409
Refining graph matching using inherent structure information
Wenzhao Li, Yi-Zhe Song, A. Cavallaro
We present a graph matching refinement framework that improves the performance of a given graph matching algorithm. Our method synergistically uses the inherent structure information embedded globally in the association graph and locally in each individual graph. Combining this information reveals how consistent each candidate match is with its global and local contexts. In doing so, the proposed method removes most false matches and improves precision. Validation on standard benchmark datasets demonstrates the effectiveness of our method.
{"title":"Refining graph matching using inherent structure information","authors":"Wenzhao Li, Yi-Zhe Song, A. Cavallaro","doi":"10.1109/ICME.2015.7177409","DOIUrl":"https://doi.org/10.1109/ICME.2015.7177409","url":null,"abstract":"We present a graph matching refinement framework that improves the performance of a given graph matching algorithm. Our method synergistically uses the inherent structure information embedded globally in the active association graph, and locally on each individual graph. The combination of such information reveals how consistent each candidate match is with its global and local contexts. In doing so, the proposed method removes most false matches and improves precision. The validation on standard benchmark datasets demonstrates the effectiveness of our method.","PeriodicalId":146271,"journal":{"name":"2015 IEEE International Conference on Multimedia and Expo (ICME)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132893828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-08-06 DOI: 10.1109/ICME.2015.7177396
Coupled dictionary learning and feature mapping for cross-modal retrieval
Xing Xu, Atsushi Shimada, R. Taniguchi, Li He
In this paper, we investigate the problem of modeling images and associated text for cross-modal retrieval tasks such as text-to-image search and image-to-text search. To make the data from image and text modalities comparable, previous cross-modal retrieval methods directly learn two projection matrices to map the raw features of the two modalities into a common subspace, in which cross-modal data matching can be performed. However, the different feature representations and correlation structures of different modalities inhibit these methods from efficiently modeling the relationships across modalities through a common subspace. To handle the diversities of different modalities, we first leverage the coupled dictionary learning method to generate homogeneous sparse representations for different modalities by associating and jointly updating their dictionaries. We then use a coupled feature mapping scheme to project the derived sparse representations from different modalities into a common subspace in which cross-modal retrieval can be performed. Experiments on a variety of cross-modal retrieval tasks demonstrate that the proposed method outperforms the state-of-the-art approaches.
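A rough sketch of the coupled feature-mapping step under stated assumptions: given (here randomly generated) sparse codes of paired image/text samples, learn linear maps into a shared subspace in which nearest-neighbor search implements cross-modal retrieval. CCA is used as a stand-in for the paper's learned projection matrices.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
# Stand-ins for sparse codes of 300 paired samples in two modalities.
A_img = rng.standard_normal((300, 50))
A_txt = A_img @ rng.standard_normal((50, 40)) \
        + 0.1 * rng.standard_normal((300, 40))

cca = CCA(n_components=10).fit(A_img, A_txt)
Z_img, Z_txt = cca.transform(A_img, A_txt)   # common subspace

# Text-to-image search: rank images by cosine similarity to a text query.
q = Z_txt[0] / np.linalg.norm(Z_txt[0])
sims = (Z_img / np.linalg.norm(Z_img, axis=1, keepdims=True)) @ q
print("top match:", int(np.argmax(sims)))    # should typically be index 0
```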
{"title":"Coupled dictionary learning and feature mapping for cross-modal retrieval","authors":"Xing Xu, Atsushi Shimada, R. Taniguchi, Li He","doi":"10.1109/ICME.2015.7177396","DOIUrl":"https://doi.org/10.1109/ICME.2015.7177396","url":null,"abstract":"In this paper, we investigate the problem of modeling images and associated text for cross-modal retrieval tasks such as text-to-image search and image-to-text search. To make the data from image and text modalities comparable, previous cross-modal retrieval methods directly learn two projection matrices to map the raw features of the two modalities into a common subspace, in which cross-modal data matching can be performed. However, the different feature representations and correlation structures of different modalities inhibit these methods from efficiently modeling the relationships across modalities through a common subspace. To handle the diversities of different modalities, we first leverage the coupled dictionary learning method to generate homogeneous sparse representations for different modalities by associating and jointly updating their dictionaries. We then use a coupled feature mapping scheme to project the derived sparse representations from different modalities into a common subspace in which cross-modal retrieval can be performed. Experiments on a variety of cross-modal retrieval tasks demonstrate that the proposed method outperforms the state-of-the-art approaches.","PeriodicalId":146271,"journal":{"name":"2015 IEEE International Conference on Multimedia and Expo (ICME)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130951298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-08-06 DOI: 10.1109/ICME.2015.7177386
Graph regularized non-negative local coordinate factorization with pairwise constraints for image representation
Yangcheng He, Hongtao Lu, Bao-Liang Lu
Chen et al. proposed a non-negative local coordinate factorization algorithm for feature extraction (NLCF) [1], which incorporates the local coordinate constraint into non-negative matrix factorization (NMF). However, NLCF is an unsupervised method that makes no use of prior information about the problem at hand. In this paper, we propose a novel graph regularized non-negative local coordinate factorization algorithm with pairwise constraints (PCGNLCF) for image representation. PCGNLCF incorporates pairwise constraints and the graph Laplacian into NLCF. More specifically, we expect data points connected by must-link constraints to have coordinates as similar as possible, and data points connected by cannot-link constraints to have coordinates as distinct as possible. Experimental results show the effectiveness of the proposed method in comparison with state-of-the-art algorithms on several real-world applications.
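To illustrate the graph-regularization ingredient, here is a minimal sketch of standard graph-regularized NMF multiplicative updates; the local-coordinate and pairwise-constraint terms of the full PCGNLCF objective are omitted, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((100, 60))                    # data: 60 samples, 100 features
A = (rng.random((60, 60)) > 0.9).astype(float)
A = np.maximum(A, A.T)                       # symmetric sample affinity graph
D = np.diag(A.sum(axis=1))                   # degree matrix (L = D - A)
lam, k = 0.1, 10
W, H = rng.random((100, k)), rng.random((k, 60))

# Multiplicative updates for ||V - WH||^2 + lam * Tr(H L H^T), H >= 0.
for _ in range(200):
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    H *= (W.T @ V + lam * H @ A) / (W.T @ W @ H + lam * H @ D + 1e-9)

print("reconstruction error:", np.linalg.norm(V - W @ H))
```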
{"title":"Graph regularized non-negative local coordinate factorization with pairwise constraints for image representation","authors":"Yangcheng He, Hongtao Lu, Bao-Liang Lu","doi":"10.1109/ICME.2015.7177386","DOIUrl":"https://doi.org/10.1109/ICME.2015.7177386","url":null,"abstract":"Chen et al. proposed a non-negative local coordinate factorization algorithm for feature extraction (NLCF) [1], which incorporated the local coordinate constraint into non-negative matrix factorization (NMF). However, NLCF is actually a unsupervised method without making use of prior information of problems in hand. In this paper, we propose a novel graph regularized non-negative local coordinate factorization with pairwise constraints algorithm (PCGNLCF) for image representation. PCGNLCF incorporates pairwise constraints and graph Laplacian into NLCF. More specifically, we expect that data points having pairwise must-link constraints will have the similar coordinates as much as possible, while data points with pairwise cannot-link constraints will have distinct coordinates as much as possible. Experimental results show the effectiveness of our proposed method in comparison to the state-of-the-art algorithms on several real-world applications.","PeriodicalId":146271,"journal":{"name":"2015 IEEE International Conference on Multimedia and Expo (ICME)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122416603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2015-08-06 DOI: 10.1109/ICME.2015.7177382
Temporally consistent region-based video exposure correction
Xuan Dong, Lu Yuan, Weixin Li, A. Yuille
We analyze the problem of temporally consistent video exposure correction. Existing methods usually either fail to evaluate the optimal exposure for every region or cannot produce temporally consistent correction results. In addition, contrast is often lost when detail is not properly preserved during correction. In this paper, we use block-based energy minimization to estimate temporally consistent exposure, considering 1) maximizing the visibility of all content, 2) preserving the relative difference between neighboring regions, and 3) keeping the exposure of corresponding content consistent across frames. Then, based on the Weber contrast definition, we propose a contrast-preserving exposure correction method. Experimental results show that our method enables better temporally consistent exposure evaluation and produces contrast-preserving outputs.
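For reference, the Weber contrast the correction method builds on is C = (L - L_b)/L_b, where L_b is the background luminance. A minimal per-pixel version with the local mean as the background (window size and data are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def weber_contrast(lum, win=15):
    """Per-pixel Weber contrast against the local mean luminance."""
    background = uniform_filter(lum.astype(np.float64), size=win)
    return (lum - background) / (background + 1e-6)

lum = np.clip(np.random.default_rng(0).normal(120, 30, (64, 64)), 0, 255)
C = weber_contrast(lum)
# An exposure correction that preserves C keeps local detail intact.
print("mean contrast:", C.mean(), "contrast spread:", C.std())
```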
{"title":"Temporally consistent region-based video exposure correction","authors":"Xuan Dong, Lu Yuan, Weixin Li, A. Yuille","doi":"10.1109/ICME.2015.7177382","DOIUrl":"https://doi.org/10.1109/ICME.2015.7177382","url":null,"abstract":"We analyze the problem of temporally consistent video exposure correction. Existing methods usually either fail to evaluate optimal exposure for every region or cannot get temporally consistent correction results. In addition, the contrast is often lost when the detail is not preserved properly during correction. In this paper, we use the block-based energy minimization to evaluate the temporally consistent exposure, which considers 1) the maximization of the visibility of all contents, 2) keeping the relative difference between neighboring regions, and 3) temporally consistent exposure of corresponding contents in different frames. Then, based on Weber contrast definition, we propose a contrast preserving exposure correction method. Experimental results show that our method enables better temporally consistent exposure evaluation and produces contrast preserving outputs.","PeriodicalId":146271,"journal":{"name":"2015 IEEE International Conference on Multimedia and Expo (ICME)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124845736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}