Correlation noise modeling for multiview transform domain Wyner-Ziv video coding
Pub Date: 2014-10-01 | DOI: 10.1109/ICIP.2014.7025648 | Pages: 3204-3208
Catarina Brites, F. Pereira
Multiview Wyner-Ziv (MV-WZ) video coding rate-distortion (RD) performance is highly influenced by the adopted correlation noise model (CNM). In the related literature, the statistics of the correlation noise between the original frame and the side information (SI), typically resulting from the fusion of temporally and inter-view created SIs, are modelled by a Laplacian distribution. In most cases, the Laplacian CNM parameter is estimated using an offline approach, assuming that either the SI is available at the encoder or the originals are available at the decoder, neither of which is realistic. In this context, this paper proposes the first practical, online CNM solution for a multiview transform domain WZ (MV-TDWZ) video codec. The online estimation of the Laplacian CNM parameter is performed at the decoder, based on metrics exploiting both the temporal and inter-view correlations at two levels of granularity, notably the transform band and the transform coefficient. The results obtained show that the best RD performance is achieved for the finest granularity level, since the inter-view, temporal and spatial correlations are then exploited with the highest adaptation.
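As a concrete illustration of the decoder-side estimation idea, the sketch below fits a per-band Laplacian scale parameter to the residual between two SI estimates, used here as an observable proxy for the true correlation noise (the original frame is not available at the decoder). This is a minimal Python sketch, not the paper's exact metrics; the band layout and the use of a temporal/inter-view SI residual are assumptions.

```python
import numpy as np

def laplacian_alpha_per_band(si_temporal_bands, si_interview_bands):
    """Minimal sketch: estimate a Laplacian scale parameter alpha for each
    transform band from the residual between the temporal and inter-view SI
    estimates (a decoder-observable proxy for the correlation noise; the
    paper's actual metrics may differ). Inputs are arrays of shape
    (num_bands, num_coeffs_per_band)."""
    residual = si_temporal_bands - si_interview_bands
    # For a zero-mean Laplacian, variance = 2 / alpha^2, hence alpha = sqrt(2 / var).
    var = np.maximum(residual.var(axis=1), 1e-8)  # guard against zero variance
    return np.sqrt(2.0 / var)
```

Per-coefficient granularity would replace the band variance with a local estimate around each coefficient, which is what lets the finest level adapt best.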
{"title":"Correlation noise modeling for multiview transform domain Wyner-Ziv video coding","authors":"Catarina Brites, F. Pereira","doi":"10.1109/ICIP.2014.7025648","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025648","url":null,"abstract":"Multiview Wyner-Ziv (MV-WZ) video coding rate-distortion (RD) performance is highly influenced by the adopted correlation noise model (CNM). In the related literature, the statistics of the correlation noise between the original frame and the side information (SI), typically resulting from the fusion of temporally and inter-view created SIs, is modelled by a Laplacian distribution. In most cases, the Laplacian CNM parameter is estimated using an offline approach, assuming that either the SI is available at the encoder or the originals are available at the decoder which is not realistic. In this context, this paper proposes the first practical, online CNM solution for a multiview transform domain WZ (MV-TDWZ) video codec. The online estimation of the Laplacian CNM parameter is performed at the decoder based on metrics exploring both the temporal and inter-view correlations with two levels of granularity, notably transform band and transform coefficient. The results obtained show that better RD performance is achieved for the finest granularity level since the inter-view, temporal and spatial correlations are exploited with the highest adaptation.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"99 1","pages":"3204-3208"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74633179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Radial distortion correction from a single image of a planar calibration pattern using convex optimization
Pub Date: 2014-10-01 | DOI: 10.1109/ICIP.2014.7025699 | Pages: 3440-3443
Xianghua Ying, Xiang Mei, Sen Yang, G. Wang, H. Zha
In Hartley and Kang's paper [7], a planar calibration pattern is treated directly as an image, forming an image pair together with a radially distorted image of the pattern; a very efficient method then determines the center of radial distortion by estimating the epipole in the distorted image. After the center of radial distortion has been determined, a least-squares method is used to recover the radial distortion function under monotonicity constraints. In this paper, we present a convex optimization method that recovers the radial distortion function using the same constraints as Hartley and Kang's method, yet obtains better radial distortion correction results. Experiments validate our approach.
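The convex program below sketches one way to pose the monotone fit: a polynomial distortion function is fitted by least squares while its derivative is constrained to be non-negative on a grid of radii, which keeps the problem convex. It assumes the cvxpy package and paired radius samples (r_distorted, r_undistorted); this is not the authors' exact formulation.

```python
import cvxpy as cp
import numpy as np

def fit_monotone_distortion(r_distorted, r_undistorted, degree=4):
    """Minimal sketch (not the paper's exact formulation): fit a polynomial
    distortion function f(r) = sum_k c_k r^k, k = 1..degree, by least squares
    on paired radius samples, with monotonicity imposed as f'(r) >= 0 on a
    grid of radii. The constraints are linear in c, so the problem is convex."""
    powers = np.arange(1, degree + 1)
    A = r_distorted[:, None] ** powers                  # design matrix, (N, degree)
    grid = np.linspace(0.0, r_distorted.max(), 100)
    D = powers * grid[:, None] ** (powers - 1)          # derivative design matrix
    c = cp.Variable(degree)
    problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ c - r_undistorted)),
                         [D @ c >= 0])
    problem.solve()
    return c.value
```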
{"title":"Radial distortion correction from a single image of a planar calibration pattern using convex optimization","authors":"Xianghua Ying, Xiang Mei, Sen Yang, G. Wang, H. Zha","doi":"10.1109/ICIP.2014.7025699","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025699","url":null,"abstract":"In Hartley-Kang's paper [7], they directly treated a planar calibration pattern as an image to construct an image pair together with a radial distorted image of the planar calibration pattern, and then proposed a very efficient method to determine the center of radial distortion by estimating the epipole in the radial distorted image. After determined the center of radial distortion, a least square method was utilized to recover the radial distortion function using the monotonicity constraints. In this paper, we present a convex optimization method to recover the radial distortion function using the same constraints as those required by Hartley-Kang's method, whereas our method can obtain better results of radial distortion correction. The experiments validate our approach.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"58 1","pages":"3440-3443"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73028491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic defocus spectral matting
Pub Date: 2014-10-01 | DOI: 10.1109/ICIP.2014.7025879 | Pages: 4328-4332
Hui Zhou, T. Ahonen
Alpha matting for a single image is an inherently under-constrained problem and thus normally requires user input. In this paper, an automatic, bottom-up matting algorithm using a defocus cue is proposed. Unlike most defocus matting algorithms, we first extract matting components by applying an unsupervised spectral matting algorithm to the single image. The defocus cue is then used to classify the matting components and form a complete foreground matte. This approach gives more robust results because focus estimation is used at the component level rather than the pixel level.
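The component-level classification step could look like the following sketch, which scores each matting component by a variance-of-Laplacian focus measure averaged over its support and keeps the in-focus components as foreground. The focus measure, the support threshold of 0.1, and the decision threshold are all illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.ndimage import laplace

def classify_components_by_defocus(gray, components, focus_thresh=10.0):
    """Minimal sketch: keep a matting component as foreground when the mean
    variance-of-Laplacian focus response over its support exceeds a threshold.
    `components` is a list of per-component alpha maps in [0, 1] (e.g., from
    spectral matting); the focus measure and thresholds are illustrative."""
    focus = laplace(gray.astype(float)) ** 2   # squared Laplacian response
    alpha = np.zeros(gray.shape, dtype=float)
    for comp in components:
        support = comp > 0.1                   # pixels the component explains
        if support.any() and focus[support].mean() > focus_thresh:
            alpha += comp                      # in-focus component -> foreground
    return np.clip(alpha, 0.0, 1.0)
```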
{"title":"Automatic defocus spectral matting","authors":"Hui Zhou, T. Ahonen","doi":"10.1109/ICIP.2014.7025879","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025879","url":null,"abstract":"Alpha matting for single image is an inherently under-constrained problem and thus normally requires user input. In this paper, an automatic, bottom-up matting algorithm using defocus cue is proposed. Different from most defocus matting algorithms, we first extract matting components by applying unsupervised spectral matting algorithm on single image. The defocus cue is then used for classifying matting components to form a complete foreground matte. This approach gives more robust result because focus estimation is used in component level rather than pixel level.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"135 1","pages":"4328-4332"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75293283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A track-before-detect algorithm using joint probabilistic data association filter and interacting multiple models
Pub Date: 2014-10-01 | DOI: 10.1109/ICIP.2014.7026002 | Pages: 4947-4951
Andrea Mazzù, Simone Chiappino, L. Marcenaro, C. Regazzoni
Detection of dim moving point targets against cluttered background can have a great impact on tracking performance. This becomes a crucial problem especially in low-SNR environments, where target characteristics are highly susceptible to corruption. In this paper, an extended target model, namely the Interacting Multiple Model (IMM), applied to a Track-Before-Detect (TBD) based detection algorithm for distant objects in infrared (IR) sequences, is presented. The approach automatically adapts the kinematic parameter estimates, such as position and velocity, in accordance with the predictions as the dimensions of the target change. A sub-par sensor can cause tracking problems; in particular, for a single object, noisy observations (i.e. fragmented measurements) could be associated with different tracks. To avoid this problem, the presented framework introduces a cooperative mechanism between the Joint Probabilistic Data Association Filter (JPDAF) and the IMM. Experimental results on real and simulated sequences demonstrate the effectiveness of the proposed approach.
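For reference, the IMM mixing step that precedes each model's Kalman prediction can be written compactly as below. This is a generic textbook sketch (model probabilities mu, Markov transition matrix P_trans), not the paper's full JPDAF-IMM coupling.

```python
import numpy as np

def imm_mix(mu, P_trans, means, covs):
    """Textbook IMM mixing step: blend the per-model state estimates according
    to the Markov model-transition matrix before each model's Kalman prediction.
    mu: (M,) model probabilities; P_trans: (M, M) with P_trans[i, j] = P(j | i);
    means: (M, n); covs: (M, n, n)."""
    c = P_trans.T @ mu                            # predicted model probabilities
    w = (P_trans * mu[:, None]) / c[None, :]      # mixing weights w[i, j]
    mixed_means = np.einsum('ij,in->jn', w, means)
    mixed_covs = np.zeros_like(covs)
    for j in range(len(mu)):
        for i in range(len(mu)):
            d = means[i] - mixed_means[j]
            mixed_covs[j] += w[i, j] * (covs[i] + np.outer(d, d))
    return c, mixed_means, mixed_covs
```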
{"title":"A track-before-detect algorithm using joint probabilistic data association filter and interacting multiple models","authors":"Andrea Mazzù, Simone Chiappino, L. Marcenaro, C. Regazzoni","doi":"10.1109/ICIP.2014.7026002","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7026002","url":null,"abstract":"Detection of dim moving point targets in cluttered background can have a great impact on the tracking performances. This may become a crucial problem, especially in low-SNR environments, where target characteristics are highly susceptible to corruption. In this paper, an extended target model, namely Interacting Multiple Model (IMM), applied to Track-Before-Detect (TBD) based detection algorithm, for far objects, in infrared (IR) sequences is presented. The approach can automatically adapts the kinematic parameter estimations, such as position and velocity, in accordance with the predictions as dimensions of the target change. A sub-par sensor can cause tracking problems. In particular, for a single object, noisy observations (i.e. fragmented measures) could be associated to different tracks. In order to avoid this problem, presented framework introduces a cooperative mechanism between Joint Probabilistic Data Association Filter (JPDAF) and IMM. The experimental results on real and simulated sequences demonstrate effectiveness of the proposed approach.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"7 1","pages":"4947-4951"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75492616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Latent fingerprint persistence: A new temporal feature space for forensic trace evidence analysis
Pub Date: 2014-10-01 | DOI: 10.1109/ICIP.2014.7026003 | Pages: 4952-4956
R. Merkel, J. Dittmann, M. Hildebrandt
In forensic applications, traces are often hard to detect and segment from challenging substrates at crime scenes. In this paper, we propose to use the temporal domain of forensic signals as a novel feature space that provides additional information about a trace. In particular, we introduce a degree-of-persistence measure and a protocol for its computation, allowing flexible extraction of time-domain information based on different features and approximation techniques. Using the example of latent fingerprints on semi-porous and porous surfaces captured with a chromatic white light (CWL) sensor, we show the potential of such an approach to improve performance on the challenge of separating prints from background. Based on 36 previously introduced spectral texture features, we achieve increased separation performance (0.01 ≤ Δκ ≤ 0.13, corresponding to 0.6% to 6.7%) when using the time-domain signal instead of spatial segmentation. The test set consists of 60 different prints on photographic, catalogue and copy paper, each acquired ten times in sequence. We observe a dependency on the surface used as well as on the number of consecutive images, identify the accuracy and reproducibility of the capturing device as the main limitation, and propose additional steps toward even higher performance in future work.
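One simple instance of such a degree-of-persistence measure, shown below, is the fitted linear decay rate of a per-block texture feature across the ten consecutive captures. The paper explicitly allows different features and approximation techniques, so the linear fit here is only an assumed choice.

```python
import numpy as np

def degree_of_persistence(feature_series):
    """Minimal sketch: one possible degree-of-persistence measure, computed as
    the linear decay rate of a texture feature across consecutive captures.
    feature_series: array of shape (..., T) holding, per block, the feature
    value in each of the T captures (T = 10 in the paper's acquisitions)."""
    t = np.arange(feature_series.shape[-1])
    flat = feature_series.reshape(-1, t.size).T   # (T, num_blocks)
    slope = np.polyfit(t, flat, 1)[0]             # per-block linear slope
    return slope.reshape(feature_series.shape[:-1])
```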
{"title":"Latent fingerprint persistence: A new temporal feature space for forensic trace evidence analysis","authors":"R. Merkel, J. Dittmann, M. Hildebrandt","doi":"10.1109/ICIP.2014.7026003","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7026003","url":null,"abstract":"In forensic applications, traces are often hard to detect and segment from challenging substrates at crime scenes. In this paper, we propose to use the temporal domain of forensic signals as a novel feature space to provide additional information about a trace. In particular we introduce a degree of persistence measure and a protocol for its computation, allowing for a flexible extraction of time domain information based on different features and approximation techniques. At the example of latent fingerprints on semi-/porous surfaces and a CWL sensor, we show the potential of such approach to achieve an increased performance for the challenge of separating prints from background. Based on 36 earlier introduced spectral texture features, we achieve an increased separation performance (0.01 ≤ Δκ ≤ 0.13, respective 0.6% to 6.7%) when using the time domain signal instead of spatial segmentation. The test set consists of 60 different prints on photographic-, catalogue- and copy paper, acquired in a sequence of ten times. We observe a dependency on the used surface as well as the number of consecutive images and identify the accuracy and reproducibility of the capturing device as the main limitation, proposing additional steps for even higher performances in future work.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"12 1","pages":"4952-4956"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73953558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistics of wavelet coefficients for sparse self-similar images
Pub Date: 2014-10-01 | DOI: 10.1109/ICIP.2014.7026230 | Pages: 6096-6100
J. Fageot, E. Bostan, M. Unser
We study the statistics of wavelet coefficients of non-Gaussian images, focusing mainly on the behaviour at coarse scales. We assume that an image can be whitened by a fractional Laplacian operator, which is consistent with a ∥ω∥^(-γ) spectral decay. In other words, we model images as sparse and self-similar stochastic processes within the framework of generalised innovation models. We show that the wavelet coefficients at coarse scales are asymptotically Gaussian even if the prior model for fine scales is sparse. We further refine our analysis by deriving the theoretical evolution of the cumulants of wavelet coefficients across scales. In particular, the evolution of the kurtosis supplies a theoretical prediction for the level of Gaussianity at each scale. Finally, we provide simulations and experiments that support our theoretical predictions.
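The prediction is easy to probe empirically. The sketch below computes the excess kurtosis of detail coefficients per decomposition level with PyWavelets; for the paper's model it should shrink toward 0 (the Gaussian value) as the scale gets coarser. The Haar wavelet and five levels are arbitrary choices for this sketch.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def detail_kurtosis_per_scale(image, wavelet='haar', levels=5):
    """Empirical check of the paper's prediction: excess kurtosis of the
    detail coefficients should approach 0 (the Gaussian value) at coarse
    scales. Wavelet and depth are arbitrary choices for this sketch."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=levels)
    out = []
    for details in coeffs[1:]:                 # skip the approximation band
        d = np.concatenate([band.ravel() for band in details])
        out.append(kurtosis(d))                # excess kurtosis, 0 for Gaussian
    return out                                 # ordered coarsest to finest
```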
{"title":"Statistics of wavelet coefficients for sparse self-similar images","authors":"J. Fageot, E. Bostan, M. Unser","doi":"10.1109/ICIP.2014.7026230","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7026230","url":null,"abstract":"We study the statistics of wavelet coefficients of non-Gaussian images, focusing mainly on the behaviour at coarse scales. We assume that an image can be whitened by a fractional Laplacian operator, which is consistent with an ∥ω∥-γ spectral decay. In other words, we model images as sparse and self-similar stochastic processes within the framework of generalised innovation models. We show that the wavelet coefficients at coarse scales are asymptotically Gaussian even if the prior model for fine scales is sparse. We further refine our analysis by deriving the theoretical evolution of the cumulants of wavelet coefficients across scales. Especially, the evolution of the kurtosis supplies a theoretical prediction for the Gaussianity level at each scale. Finally, we provide simulations and experiments that support our theoretical predictions.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"12 1","pages":"6096-6100"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74344857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image demosaicing by using iterative residual interpolation
Pub Date: 2014-10-01 | DOI: 10.1109/ICIP.2014.7025373 | Pages: 1862-1866
W. Ye, K. Ma
A new demosaicing approach was recently introduced that interpolates on generated residual fields rather than on the color-component difference fields commonly used in most demosaicing methods. In view of the attractive performance delivered by this residual interpolation (RI) strategy, a new RI-based demosaicing method with much improved performance is proposed in this paper. The key to the success of our approach is that the RI process is deployed iteratively to all three channels, generating a more accurately reconstructed G channel, from which the R and B channels can in turn be better reconstructed. Extensive simulations conducted on two commonly used test datasets clearly demonstrate that our algorithm is superior to existing state-of-the-art demosaicing methods, in both objective performance evaluation and subjective perceptual quality.
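The core RI step can be summarized as follows: rather than interpolating a sparsely sampled channel directly, one interpolates the residual between its samples and a tentative estimate of that channel, then adds the interpolated residual field back, since residuals are smoother and interpolate with less error. The sketch below uses plain linear scattered-data interpolation for the residuals; the actual method builds the tentative estimate with guided filtering, which is taken as given here.

```python
import numpy as np
from scipy.interpolate import griddata

def residual_interpolate(sparse_channel, mask, tentative):
    """Minimal sketch of one RI step: interpolate the residual between the
    sparse channel samples and a tentative estimate of that channel, then add
    the residual field back. Plain linear scattered-data interpolation stands
    in for the paper's interpolation; the tentative estimate (built with
    guided filtering in the actual method) is assumed to be given."""
    ys, xs = np.nonzero(mask)                  # sampled positions (CFA sites)
    residual_samples = sparse_channel[ys, xs] - tentative[ys, xs]
    grid_y, grid_x = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    residual_field = griddata((ys, xs), residual_samples, (grid_y, grid_x),
                              method='linear', fill_value=0.0)
    return tentative + residual_field
```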
{"title":"Image demosaicing by using iterative residual interpolation","authors":"W. Ye, K. Ma","doi":"10.1109/ICIP.2014.7025373","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025373","url":null,"abstract":"A new demosaicing approach has been introduced recently, which is based on conducting interpolation on the generated residual fields rather than on the color-component difference fields as commonly practiced in most demosaicing methods. In view of its attractive performance delivered by such residual interpolation (RI) strategy, a new RI-based demosaicing method is proposed in this paper that has shown much improved performance. The key success of our approach lies in that the RI process is iteratively deployed to all the three channels for generating a more accurately reconstructed G channel, from which the R channel and the B channel can be better reconstructed as well. Extensive simulations conducted on two commonly-used test datasets have clearly demonstrated that our algorithm is superior to the existing state-of-the-art demosaicing methods, both on objective performance evaluation and on subjective perceptual quality.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"11 1","pages":"1862-1866"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78596930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trajectory clustering for motion pattern extraction in aerial videos
Pub Date: 2014-10-01 | DOI: 10.1109/ICIP.2014.7025203 | Pages: 1016-1020
T. Nawaz, A. Cavallaro, B. Rinner
We present an end-to-end approach for trajectory clustering from aerial videos that enables the extraction of motion patterns in urban scenes. Camera motion is first compensated for by mapping object trajectories onto a reference plane. Clustering is then performed based on statistics of the Discrete Wavelet Transform coefficients extracted from the trajectories. Finally, motion patterns are identified by distance minimization from the centroids of the trajectory clusters. Experimental validation on four datasets shows the effectiveness of the proposed approach in extracting trajectory clusters. We also make available two new real-world aerial video datasets together with the estimated object trajectories and ground-truth cluster labeling.
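A minimal version of the feature-and-cluster stage might look like the sketch below: each motion-compensated trajectory is described by simple statistics of its DWT detail coefficients and the descriptors are grouped with k-means. The specific statistics (mean and standard deviation per band), the wavelet, and the clustering algorithm are assumptions, not the paper's exact choices.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def cluster_trajectories(trajectories, n_clusters=4, wavelet='db2', level=3):
    """Minimal sketch: describe each motion-compensated trajectory by the mean
    and standard deviation of its DWT detail coefficients (per band and per
    coordinate) and cluster the descriptors with k-means. Statistics, wavelet
    and clustering algorithm are assumptions, not the paper's exact choices."""
    feats = []
    for traj in trajectories:                  # traj: (T, 2) array of (x, y)
        f = []
        for dim in range(2):
            coeffs = pywt.wavedec(traj[:, dim], wavelet, level=level)
            for band in coeffs[1:]:            # detail bands only
                f += [band.mean(), band.std()]
        feats.append(f)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.array(feats))
```

Per-band statistics give a fixed-length descriptor regardless of trajectory length, which is what makes plain k-means applicable here.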
{"title":"Trajectory clustering for motion pattern extraction in aerial videos","authors":"T. Nawaz, A. Cavallaro, B. Rinner","doi":"10.1109/ICIP.2014.7025203","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025203","url":null,"abstract":"We present an end-to-end approach for trajectory clustering from aerial videos that enables the extraction of motion patterns in urban scenes. Camera motion is first compensated by mapping object trajectories on a reference plane. Then clustering is performed based on statistics from the Discrete Wavelet Transform coefficients extracted from the trajectories. Finally, motion patterns are identified by distance minimization from the centroids of the trajectory clusters. The experimental validation on four datasets shows the effectiveness of the proposed approach in extracting trajectory clusters. We also make available two new real-world aerial video datasets together with the estimated object trajectories and ground-truth cluster labeling.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"19 1","pages":"1016-1020"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78613013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A conditional random field approach for face identification in broadcast news using overlaid text
Pub Date: 2014-10-01 | DOI: 10.1109/ICIP.2014.7025063 | Pages: 318-322
P. Gay, E. Khoury, S. Meignier, J.-M. Odobez, P. Deléglise
We investigate the problem of face identification in broadcast programs, where person names are obtained from text overlays automatically processed with Optical Character Recognition (OCR) and then linked to faces throughout the video. To solve the face-name association and propagation problem, we propose a novel approach that combines the positive effects of two Conditional Random Field (CRF) models: a CRF for person diarization (joint temporal segmentation and association of voices and faces) that benefits from the combination of multiple cues, including, as main contributions, the use of identification sources (OCR appearances) and a recurrent local face visual background (LFB) playing the role of a namedness feature; and a second CRF for the joint identification of the person clusters that improves identification performance thanks to the use of further diarization statistics. Experiments conducted on a recent and substantial public dataset of 7 different shows demonstrate the interest and complementarity of the different modeling steps and information sources, leading to state-of-the-art results.
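To make the association signal concrete, the greedy sketch below scores each (face cluster, OCR name) pair by the temporal overlap between the cluster's face tracks and the name's overlay intervals, then picks the best name per cluster. The paper solves this jointly with CRFs; this simplified version only illustrates the co-occurrence cue the models build on, and the interval-based inputs are an assumption.

```python
from collections import defaultdict

def assign_names(face_clusters, ocr_names):
    """Greedy sketch of the association cue: score each (cluster, name) pair
    by total temporal overlap between the cluster's face-track intervals and
    the name's OCR overlay intervals, then keep the best-scoring name per
    cluster. Inputs map ids/names to lists of (t_start, t_end) intervals."""
    score = defaultdict(float)
    for cid, face_intervals in face_clusters.items():
        for name, text_intervals in ocr_names.items():
            for a0, a1 in face_intervals:
                for b0, b1 in text_intervals:
                    score[(cid, name)] += max(0.0, min(a1, b1) - max(a0, b0))
    best = {}
    for (cid, name), s in score.items():
        if s > best.get(cid, ('', 0.0))[1]:
            best[cid] = (name, s)
    return {cid: name for cid, (name, _) in best.items()}
```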
{"title":"A conditional random field approach for face identification in broadcast news using overlaid text","authors":"G. Paul, Khoury Elie, Meignier Sylvain, Odobez Jean-Marc, D. Paul","doi":"10.1109/ICIP.2014.7025063","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025063","url":null,"abstract":"We investigate the problem of face identification in broadcast programs where people names are obtained from text overlays automatically processed with Optical Character Recognition (OCR) and further linked to the faces throughout the video. To solve the face-name association and propagation, we propose a novel approach that combines the positive effects of two Conditional Random Field (CRF) models: a CRF for person diarization (joint temporal segmentation and association of voices and faces) that benefit from the combination of multiple cues including as main contributions the use of identification sources (OCR appearances) and recurrent local face visual background (LFB) playing the role of a namedness feature; a second CRF for the joint identification of the person clusters that improves identification performance thanks to the use of further diarization statistics. Experiments conducted on a recent and substantial public dataset of 7 different shows demonstrate the interest and complementarity of the different modeling steps and information sources, leading to state of the art results.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"28 1","pages":"318-322"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78197285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
2D+t autoregressive framework for video texture completion
Pub Date: 2014-10-01 | DOI: 10.1109/ICIP.2014.7025944 | Pages: 4657-4661
Fabien Racapé, D. Doshkov, Martin Köppel, P. Ndjiki-Nya
In this paper, an improved 2D+t texture completion framework is proposed that provides high visual quality for completed dynamic textures. A spatiotemporal autoregressive (STAR) model is used to propagate the signal of several available frames onto frames containing missing textures. Classically, Gaussian white noise drives the model to enable texture innovation. To improve on this, an innovation process is proposed that uses texture information from available training frames. The proposed method is deterministic, which solves a key problem for applications such as synthesis-based video coding. Compression simulations show potential bitrate savings of up to 49% on texture sequences at comparable visual quality. Video results are provided online to allow assessment of the visual quality of the completed textures.
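A single synthesis step of such a model can be sketched as below: each pixel of the completed frame is a weighted combination of neighbours taken from the previous frame plus an innovation term, which the paper draws deterministically from training frames instead of sampling Gaussian noise. The purely temporal neighbourhood and the weights dictionary are simplifying assumptions (the full STAR model also uses spatial neighbours in the current frame).

```python
import numpy as np

def star_step(prev_frame, weights, innovation):
    """Minimal sketch of one synthesis step of a (purely temporal) STAR
    variant: each pixel of the new frame is a weighted sum of offset copies
    of the previous frame plus an innovation term. `weights` maps (dy, dx)
    offsets to AR coefficients; `innovation` stands for the deterministic
    texture signal the paper extracts from training frames (classically,
    Gaussian white noise)."""
    pred = np.zeros_like(prev_frame, dtype=float)
    for (dy, dx), w in weights.items():
        pred += w * np.roll(np.roll(prev_frame, dy, axis=0), dx, axis=1)
    return pred + innovation
```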
{"title":"2D+t autoregressive framework for video texture completion","authors":"Fabien Racapé, D. Doshkov, Martin Köppel, P. Ndjiki-Nya","doi":"10.1109/ICIP.2014.7025944","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025944","url":null,"abstract":"In this paper, an improved 2D+t texture completion framework is proposed, providing high visual quality of completed dynamic textures. A Spatiotemporal Autoregressive model (STAR) is used to propagate the signal of several available frames onto frames containing missing textures. A Gaussian white noise classically drives the model to enable texture innovation. To improve this method, an innovation process is proposed, that uses texture information from available training frames. The proposed method is deterministic, which solves a key problem for applications such as synthesis-based video coding. Compression simulations show potential bitrate savings up to 49% on texture sequences at comparable visual quality. Video results are provided online to allow assessing the visual quality of completed textures.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"7 1","pages":"4657-4661"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78379834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}