Title: A computational approach to relative aesthetics
Authors: Vijetha Gattupalli, P. S. Chandakkar, Baoxin Li
Pub Date: 2017-04-05, DOI: 10.1109/ICPR.2016.7900003
Computational visual aesthetics has recently become an active research area. Existing state-of-the-art methods formulate this as a binary classification task in which a given image is predicted to be beautiful or not. In many applications, such as image retrieval and enhancement, it is more important to rank images by their aesthetic quality than to sort them into two categories. Furthermore, in such applications all images may belong to the same category, so determining an aesthetic ranking is more appropriate. To this end, we formulate a novel problem of ranking images with respect to their aesthetic quality. We construct a new dataset of image pairs with relative labels by carefully selecting images from the popular AVA dataset. Unlike in aesthetics classification, there is no single threshold that would determine the ranking order of the images across our entire dataset. We propose a deep neural network based approach that is trained on image pairs by incorporating principles from relative learning. Results show that such a relative training procedure allows our network to rank images with higher accuracy than a state-of-the-art network trained on the same set of images using binary labels.
{"title":"A computational approach to relative aesthetics","authors":"Vijetha Gattupalli, P. S. Chandakkar, Baoxin Li","doi":"10.1109/ICPR.2016.7900003","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7900003","url":null,"abstract":"Computational visual aesthetics has recently become an active research area. Existing state-of-art methods formulate this as a binary classification task where a given image is predicted to be beautiful or not. In many applications such as image retrieval and enhancement, it is more important to rank images based on their aesthetic quality instead of binary-categorizing them. Furthermore, in such applications, it may be possible that all images belong to the same category. Hence determining the aesthetic ranking of the images is more appropriate. To this end, we formulate a novel problem of ranking images with respect to their aesthetic quality. We construct a new dataset of image pairs with relative labels by carefully selecting images from the popular AVA dataset. Unlike in aesthetics classification, there is no single threshold which would determine the ranking order of the images across our entire dataset. We propose a deep neural network based approach that is trained on image pairs by incorporating principles from relative learning. Results show that such relative training procedure allows our network to rank the images with a higher accuracy than a state-of-art network trained on the same set of images using binary labels.","PeriodicalId":151180,"journal":{"name":"2016 23rd International Conference on Pattern Recognition (ICPR)","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131475939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Clustering for point pattern data
Authors: Nhat-Quang Tran, B. Vo, Dinh Q. Phung, B. Vo
Pub Date: 2017-02-08, DOI: 10.1109/ICPR.2016.7900123
Clustering is one of the most common unsupervised learning tasks in machine learning and data mining. Clustering algorithms have been used in a plethora of applications across several scientific fields. However, there has been limited research in the clustering of point patterns - sets or multi-sets of unordered elements - that are found in numerous applications and data sources. In this paper, we propose two approaches for clustering point patterns. The first is a non-parametric method based on novel distances for sets. The second is a model-based approach, formulated via random finite set theory, and solved by the Expectation-Maximization algorithm. Numerical experiments show that the proposed methods perform well on both simulated and real data.
{"title":"Clustering for point pattern data","authors":"Nhat-Quang Tran, B. Vo, Dinh Q. Phung, B. Vo","doi":"10.1109/ICPR.2016.7900123","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7900123","url":null,"abstract":"Clustering is one of the most common unsupervised learning tasks in machine learning and data mining. Clustering algorithms have been used in a plethora of applications across several scientific fields. However, there has been limited research in the clustering of point patterns - sets or multi-sets of unordered elements - that are found in numerous applications and data sources. In this paper, we propose two approaches for clustering point patterns. The first is a non-parametric method based on novel distances for sets. The second is a model-based approach, formulated via random finite set theory, and solved by the Expectation-Maximization algorithm. Numerical experiments show that the proposed methods perform well on both simulated and real data.","PeriodicalId":151180,"journal":{"name":"2016 23rd International Conference on Pattern Recognition (ICPR)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126284493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Using Convolutional 3D Neural Networks for User-independent continuous gesture recognition
Authors: Necati Cihan Camgöz, Simon Hadfield, Oscar Koller, R. Bowden
Pub Date: 2016-12-04, DOI: 10.1109/ICPR.2016.7899606
In this paper, we propose using 3D Convolutional Neural Networks for large-scale user-independent continuous gesture recognition. We train an end-to-end deep network for continuous gesture recognition, jointly learning both the feature representation and the classifier. The network performs three-dimensional (i.e., space-time) convolutions to extract features related to both appearance and motion from volumes of color frames. Space-time invariance of the extracted features is encoded via pooling layers. The earlier stages of the network are partially initialized using the work of Tran et al. before being adapted to the task of gesture recognition. An earlier version of the proposed method, trained for 11,250 iterations, was submitted to the ChaLearn 2016 Continuous Gesture Recognition Challenge and ranked 2nd with a Mean Jaccard Index Score of 0.269235. When the proposed method was further trained, to 28,750 iterations, it achieved state-of-the-art performance on the same dataset, yielding a Mean Jaccard Index Score of 0.314779.
{"title":"Using Convolutional 3D Neural Networks for User-independent continuous gesture recognition","authors":"Necati Cihan Camgöz, Simon Hadfield, Oscar Koller, R. Bowden","doi":"10.1109/ICPR.2016.7899606","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7899606","url":null,"abstract":"In this paper, we propose using 3D Convolutional Neural Networks for large scale user-independent continuous gesture recognition. We have trained an end-to-end deep network for continuous gesture recognition (jointly learning both the feature representation and the classifier). The network performs three-dimensional (i.e. space-time) convolutions to extract features related to both the appearance and motion from volumes of color frames. Space-time invariance of the extracted features is encoded via pooling layers. The earlier stages of the network are partially initialized using the work of Tran et al. before being adapted to the task of gesture recognition. An earlier version of the proposed method, which was trained for 11,250 iterations, was submitted to ChaLearn 2016 Continuous Gesture Recognition Challenge and ranked 2nd with the Mean Jaccard Index Score of 0.269235. When the proposed method was further trained for 28,750 iterations, it achieved state-of-the-art performance on the same dataset, yielding a 0.314779 Mean Jaccard Index Score.","PeriodicalId":151180,"journal":{"name":"2016 23rd International Conference on Pattern Recognition (ICPR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126612111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Novel generative model for facial expressions based on statistical shape analysis of landmarks trajectories
Authors: P. Desrosiers, M. Daoudi, M. Devanne
Pub Date: 2016-12-04, DOI: 10.1109/ICPR.2016.7899760
We propose a novel geometric framework for analyzing spontaneous facial expressions, with the specific goal of comparing, matching, and averaging the shapes of landmark trajectories. We represent facial expressions by the motion of facial landmarks across time, and the trajectories themselves by curves. We use elastic shape analysis of these curves to develop a Riemannian framework for analyzing the shapes of these trajectories. In terms of empirical evaluation, our results on two databases, UvA-NEMO and Cohn-Kanade CK+, are very promising. From a theoretical perspective, this framework allows formal statistical inference, such as the generation of facial expressions.
{"title":"Novel generative model for facial expressions based on statistical shape analysis of landmarks trajectories","authors":"P. Desrosiers, M. Daoudi, M. Devanne","doi":"10.1109/ICPR.2016.7899760","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7899760","url":null,"abstract":"We propose a novel geometric framework for analyzing spontaneous facial expressions, with the specific goal of comparing, matching, and averaging the shapes of landmarks trajectories. Here we represent facial expressions by the motion of the landmarks across the time. The trajectories are represented by curves. We use elastic shape analysis of these curves to develop a Riemannian framework for analyzing shapes of these trajectories. In terms of empirical evaluation, our results on two databases: UvA-NEMO and Cohn-Kanade CK+ are very promising. From a theoretical perspective, this framework allows formal statistical inferences, such as generation of facial expressions.","PeriodicalId":151180,"journal":{"name":"2016 23rd International Conference on Pattern Recognition (ICPR)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129499919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Exploiting social and mobility patterns for friendship prediction in location-based social networks
Authors: J. Valverde-Rebaza, M. Roche, P. Poncelet, A. Lopes
Pub Date: 2016-12-04, DOI: 10.1109/ICPR.2016.7900016
Link prediction is a “hot topic” in network analysis and has been widely used for friendship recommendation in social networks. With the increased use of location-based services, it is possible to improve the accuracy of link prediction methods by exploiting user mobility. Most link prediction methods focus on the importance of locations for their visitors, disregarding the strength of the relationships existing between those visitors. We therefore propose three new methods for friendship prediction that efficiently combine the social and mobility patterns of users in location-based social networks (LBSNs). Experiments conducted on real-world datasets demonstrate that our proposals achieve performance competitive with methods from the literature and, in most cases, outperform them. Moreover, our proposals use fewer computational resources by considerably reducing the number of irrelevant predictions, making the link prediction task more efficient and more applicable in real-world settings.
{"title":"Exploiting social and mobility patterns for friendship prediction in location-based social networks","authors":"J. Valverde-Rebaza, M. Roche, P. Poncelet, A. Lopes","doi":"10.1109/ICPR.2016.7900016","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7900016","url":null,"abstract":"Link prediction is a “hot topic” in network analysis and has been largely used for friendship recommendation in social networks. With the increased use of location-based services, it is possible to improve the accuracy of link prediction methods by using the mobility of users. The majority of the link prediction methods focus on the importance of location for their visitors, disregarding the strength of relationships existing between these visitors. We, therefore, propose three new methods for friendship prediction by combining, efficiently, social and mobility patterns of users in location-based social networks (LBSNs). Experiments conducted on real-world datasets demonstrate that our proposals achieve a competitive performance with methods from the literature and, in most of the cases, outperform them. Moreover, our proposals use less computational resources by reducing considerably the number of irrelevant predictions, making the link prediction task more efficient and applicable for real world applications.","PeriodicalId":151180,"journal":{"name":"2016 23rd International Conference on Pattern Recognition (ICPR)","volume":"143 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129484956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Comparative study of descriptors with dense key points
Authors: H. Chatoux, F. Lecellier, C. Fernandez-Maloigne
Pub Date: 2016-12-04, DOI: 10.1109/ICPR.2016.7899928
A great number of feature detectors and descriptors are proposed every year for various computer vision applications. In this paper, we concentrate on a dense detector applied to different descriptors. Eight descriptors are compared: three from the gradient-based family (SIFT, SURF, DAISY) and five from the binary category (BRIEF, ORB, BRISK, FREAK, and LATCH). These descriptors are designed with certain invariance properties. We verify these invariances under various geometric and photometric transformations, varying one at a time; deformations are computed from an original image. Descriptors are tested on five transformations: scale, rotation, viewpoint, illumination, and reflection. Overall, the descriptors display the expected invariances. This paper's objective is to establish a reproducible protocol for testing descriptor invariances.
{"title":"Comparative study of descriptors with dense key points","authors":"H. Chatoux, F. Lecellier, C. Fernandez-Maloigne","doi":"10.1109/ICPR.2016.7899928","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7899928","url":null,"abstract":"A great deal of features detectors and descriptors are proposed every years for several computer vision applications. In this paper, we concentrate on dense detector applied to different descriptors. Eight descriptors are compared, three from gradient based family (SIFT, SURF, DAISY), others from binary category (BRIEF, ORB, BRISK, FREAK and LATCH). These descriptors are created and defined with certain invariance properties. We want to verify their invariances with various geometric and photometric transformations, varying one at a time. Deformations are computed from an original image. Descriptors are tested on five transformations: scale, rotation, viewpoint, illumination plus reflection. Overall, descriptors display the right invariances. This paper's objective is to establish a reproducible protocol to test descriptors invariances.","PeriodicalId":151180,"journal":{"name":"2016 23rd International Conference on Pattern Recognition (ICPR)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121954066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Remote photoplethysmography based on implicit living skin tissue segmentation
Authors: Serge Bobbia, Y. Benezeth, Julien Dubois
Pub Date: 2016-12-04, DOI: 10.1109/ICPR.2016.7899660
Region of interest selection is an essential step in remote photoplethysmography (rPPG) algorithms. Most of the time, face detection based on supervised learning of appearance features, coupled with skin detection, is used to select the region of interest. However, both methods have several limitations, and we propose instead to implicitly select living skin tissue via its characteristic pulsatility. The input video stream is decomposed into several temporal superpixels, from which pulse signals are extracted. A pulsatility measure for each temporal superpixel is then used to merge pulse traces and estimate the photoplethysmogram signal. This makes it possible to select skin tissue and, furthermore, to favor areas where the pulse trace is more prominent. Experimental results show that our method performs better than state-of-the-art algorithms without any critical face or skin detection step.
{"title":"Remote photoplethysmography based on implicit living skin tissue segmentation","authors":"Serge Bobbia, Y. Benezeth, Julien Dubois","doi":"10.1109/ICPR.2016.7899660","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7899660","url":null,"abstract":"Region of interest selection is an essential part for remote photoplethysmography (rPPG) algorithms. Most of the time, face detection provided by a supervised learning of physical appearance features coupled with skin detection is used for region of interest selection. However, both methods have several limitations and we propose to implicitly select living skin tissue via their particular pulsatility feature. The input video stream is decomposed into several temporal superpixels from which pulse signals are extracted. Pulsatility measure for each temporal superpixel is then used to merge pulse traces and estimate the photoplethysmogram signal. This allows to select skin tissue and furthermore to favor areas where the pulse trace is more predominant. Experimental results showed that our method perform better than state of the art algorithms without any critical face or skin detection.","PeriodicalId":151180,"journal":{"name":"2016 23rd International Conference on Pattern Recognition (ICPR)","volume":"2009 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127334850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Effective surface normals based action recognition in depth images
Authors: X. Nguyen, T. Nguyen, F. Charpillet
Pub Date: 2016-12-04, DOI: 10.1109/ICPR.2016.7899736
In this paper, we propose a new local descriptor for action recognition in depth images. The proposed descriptor relies on surface normals in the 4D space of depth, time, and spatial coordinates, together with higher-order partial derivatives of depth values along the spatial coordinates. To classify actions, we follow the traditional Bag-of-Words (BoW) approach and propose two encoding methods, termed Multi-Scale Fisher Vector (MSFV) and Temporal Sparse Coding based Fisher Vector Coding (TSCFVC), to form global representations of depth sequences. The high-dimensional action descriptors resulting from the two encoding methods are fed to a linear SVM for efficient action classification. Our proposed methods are evaluated on two public benchmark datasets, MSRAction3D and MSRGesture3D. The experimental results show the effectiveness of the proposed methods on both datasets.
{"title":"Effective surface normals based action recognition in depth images","authors":"X. Nguyen, T. Nguyen, F. Charpillet","doi":"10.1109/ICPR.2016.7899736","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7899736","url":null,"abstract":"In this paper, we propose a new local descriptor for action recognition in depth images. The proposed descriptor relies on surface normals in 4D space of depth, time, spatial coordinates and higher-order partial derivatives of depth values along spatial coordinates. In order to classify actions, we follow the traditional Bag-of-words (BoW) approach, and propose two encoding methods termed Multi-Scale Fisher Vector (MSFV) and Temporal Sparse Coding based Fisher Vector Coding (TSCFVC) to form global representations of depth sequences. The high-dimensional action descriptors resulted from the two encoding methods are fed to a linear SVM for efficient action classification. Our proposed methods are evaluated on two public benchmark datasets, MSRAction3D and MSRGesture3D. The experimental result shows the effectiveness of the proposed methods on both the datasets.","PeriodicalId":151180,"journal":{"name":"2016 23rd International Conference on Pattern Recognition (ICPR)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121786348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A 2D shape structure for decomposition and part similarity
Authors: Kathryn Leonard, Géraldine Morin, S. Hahmann, A. Carlier
Pub Date: 2016-12-04, DOI: 10.1109/ICPR.2016.7900130
This paper presents a multilevel analysis of 2D shapes and uses it to find similarities between the different parts of a shape. Such an analysis is important for many applications, such as shape comparison, editing, and compression. Our robust and stable method decomposes a shape into parts, determines a part hierarchy, and measures similarity between parts based on a salience measure defined on the medial axis, the Weighted Extended Distance Function (WEDF), providing a multi-resolution partition of the shape that is stable across scale and articulation. Comparison with an extensive user study on the MPEG-7 database demonstrates that our geometric results are consistent with user perception.
{"title":"A 2D shape structure for decomposition and part similarity","authors":"Kathryn Leonard, Géraldine Morin, S. Hahmann, A. Carlier","doi":"10.1109/ICPR.2016.7900130","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7900130","url":null,"abstract":"This paper presents a multilevel analysis of 2D shapes and uses it to find similarities between the different parts of a shape. Such an analysis is important for many applications such as shape comparison, editing, and compression. Our robust and stable method decomposes a shape into parts, determines a parts hierarchy, and measures similarity between parts based on a salience measure on the medial axis, the Weighted Extended Distance Function, providing a multi-resolution partition of the shape that is stable across scale and articulation. Comparison with an extensive user study on the MPEG-7 database demonstrates that our geometric results are consistent with user perception.","PeriodicalId":151180,"journal":{"name":"2016 23rd International Conference on Pattern Recognition (ICPR)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125241837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: SCALP: Superpixels with Contour Adherence using Linear Path
Authors: Rémi Giraud, Vinh-Thong Ta, N. Papadakis
Pub Date: 2016-12-04, DOI: 10.1109/ICPR.2016.7899991
Superpixel decomposition methods are generally used as a pre-processing step to speed up image processing tasks. They group the pixels of an image into homogeneous regions while trying to respect existing contours. All state-of-the-art superpixel decomposition methods make a trade-off between 1) computational time, 2) adherence to image contours, and 3) regularity and compactness of the decomposition. In this paper, we propose a fast method to compute Superpixels with Contour Adherence using Linear Path (SCALP) in an iterative clustering framework. The distance computed when associating a pixel with a superpixel during clustering is enhanced by considering the linear path to the superpixel barycenter. The proposed framework produces regular and compact superpixels that adhere to image contours. We provide a detailed evaluation of SCALP on the standard Berkeley Segmentation Dataset. The obtained results outperform state-of-the-art methods in terms of standard superpixel and contour detection metrics.
{"title":"SCALP: Superpixels with Contour Adherence using Linear Path","authors":"Rémi Giraud, Vinh-Thong Ta, N. Papadakis","doi":"10.1109/ICPR.2016.7899991","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7899991","url":null,"abstract":"Superpixel decomposition methods are generally used as a pre-processing step to speed up image processing tasks. They group the pixels of an image into homogeneous regions while trying to respect existing contours. For all state-of-the-art superpixel decomposition methods, a trade-off is made between 1) computational time, 2) adherence to image contours and 3) regularity and compactness of the decomposition. In this paper, we propose a fast method to compute Superpixels with Contour Adherence using Linear Path (SCALP) in an iterative clustering framework. The distance computed when trying to associate a pixel to a superpixel during the clustering is enhanced by considering the linear path to the superpixel barycenter. The proposed framework produces regular and compact superpixels that adhere to the image contours. We provide a detailed evaluation of SCALP on the standard Berkeley Segmentation Dataset. The obtained results outperform state-of-the-art methods in terms of standard superpixel and contour detection metrics.","PeriodicalId":151180,"journal":{"name":"2016 23rd International Conference on Pattern Recognition (ICPR)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133578007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}