Pub Date: 2015-06-10 | DOI: 10.1109/CBMI.2015.7153608
Nicolas Voiron, A. Benoît, Andrei Filip, P. Lambert, B. Ionescu
In our data-driven world, clustering is of major importance in helping end-users and decision makers understand information structures. Supervised learning techniques rely on ground truth to perform the classification and are usually subject to overtraining issues. On the other hand, unsupervised clustering techniques study the structure of the data without access to any training data. Given the difficulty of the task, unsupervised learning tends to provide results inferior to those of supervised learning. A compromise is then to use learning only for some of the ambiguous classes, in order to boost performance. In this context, this paper studies the impact of pairwise constraints on unsupervised spectral clustering. We introduce a new generalization of constraint propagation that maximizes partitioning quality while reducing annotation costs. Experiments show the efficiency of the proposed scheme.
{"title":"Semi-supervised spectral clustering with automatic propagation of pairwise constraints","authors":"Nicolas Voiron, A. Benoît, Andrei Filip, P. Lambert, B. Ionescu","doi":"10.1109/CBMI.2015.7153608","DOIUrl":"https://doi.org/10.1109/CBMI.2015.7153608","url":null,"abstract":"In our data driven world, clustering is of major importance to help end-users and decision makers understanding information structures. Supervised learning techniques rely on ground truth to perform the classification and are usually subject to overtraining issues. On the other hand, unsupervised clustering techniques study the structure of the data without disposing of any training data. Given the difficulty of the task, unsupervised learning tends to provide inferior results to supervised learning. A compromise is then to use learning only for some of the ambiguous classes, in order to boost performances. In this context, this paper studies the impact of pairwise constraints to unsupervised Spectral Clustering. We introduce a new generalization of constraint propagation which maximizes partitioning quality while reducing annotation costs. Experiments show the efficiency of the proposed scheme.","PeriodicalId":387496,"journal":{"name":"2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116995868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-10 | DOI: 10.1109/CBMI.2015.7153606
Omar Seddati, S. Dupont, S. Mahmoudi
In this paper, we present a system for sketch classification and similarity search. We use deep convolutional neural networks (ConvNets), the state of the art in image recognition, which enable both classification and medium/high-level feature extraction. We make use of ConvNet features as a basis for similarity search using k-Nearest Neighbors (kNN). Evaluation is performed on the TU-Berlin benchmark. Our main contributions are threefold: first, we use ConvNets, in contrast to most previous approaches based essentially on hand-crafted features. Second, we propose a ConvNet that is both more accurate and lighter/faster than the only two previous attempts at using ConvNets for hand-sketch recognition; we reach an accuracy of 75.42%. Third, we show that, as with natural images, ConvNets allow the extraction of medium-level and high-level features (depending on the depth) which can be used for similarity search.
{"title":"DeepSketch: Deep convolutional neural networks for sketch recognition and similarity search","authors":"Omar Seddati, S. Dupont, S. Mahmoudi","doi":"10.1109/CBMI.2015.7153606","DOIUrl":"https://doi.org/10.1109/CBMI.2015.7153606","url":null,"abstract":"In this paper, we present a system for sketch classification and similarity search. We used deep convolution neural networks (ConvNets), state of the art in the field of image recognition. They enable both classification and medium/highlevel features extraction. We make use of ConvNets features as a basis for similarity search using k-Nearest Neighbors (kNN). Evaluation are performed on the TU-Berlin benchmark. Our main contributions are threefold: first, we use ConvNets in contrast to most previous approaches based essentially on hand crafted features. Secondly, we propose a ConvNet that is both more accurate and lighter/faster than the two only previous attempts at making use of ConvNets for handsketch recognition. We reached an accuracy of 75.42%. Third, we shown that similarly to their application on natural images, ConvNets allow the extraction of medium-level and high-level features (depending on the depth) which can be used for similarity search.1","PeriodicalId":387496,"journal":{"name":"2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128756009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-10 | DOI: 10.1109/CBMI.2015.7153613
B. Boteanu, Ionut Mironica, B. Ionescu
This article addresses the issue of social image search result diversification. We propose a novel perspective on the diversification problem via Relevance Feedback (RF). Traditional RF introduces the user into the processing loop by harvesting feedback about the relevance of the search results; this information is used to recompute a better representation of the data sought. The novelty of our work lies in exploiting this concept in a completely automated manner via pseudo-relevance feedback, while prioritizing the diversification of the results over their relevance. User feedback is simulated automatically by selecting positive and negative examples, with regard to relevance, from the initial query results. Unsupervised hierarchical clustering is used to regroup images according to their content, and diversification is finally achieved with a re-ranking approach. Experimental validation on Flickr data shows the advantages of this approach.
{"title":"Hierarchical clustering pseudo-relevance feedback for social image search result diversification","authors":"B. Boteanu, Ionut Mironica, B. Ionescu","doi":"10.1109/CBMI.2015.7153613","DOIUrl":"https://doi.org/10.1109/CBMI.2015.7153613","url":null,"abstract":"This article addresses the issue of social image search result diversification. We propose a novel perspective for the diversification problem via Relevance Feedback (RF). Traditional RF introduces the user in the processing loop by harvesting feedback about the relevance of the search results. This information is used for recomputing a better representation of the data needed. The novelty of our work is in exploiting this concept in a completely automated manner via pseudo-relevance, while pushing in priority the diversification of the results, rather than relevance. User feedback is simulated automatically by selecting positive and negative examples with regard to relevance, from the initial query results. Unsupervised hierarchical clustering is used to re-group images according to their content. Diversification is finally achieved with a re-ranking approach. Experimental validation on Flickr data shows the advantages of this approach.","PeriodicalId":387496,"journal":{"name":"2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115390870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-10 | DOI: 10.1109/CBMI.2015.7153611
C. E. Santos, Ewa Kijak, G. Gravier, W. R. Schwartz
Face recognition has been widely studied in past years. However, most related work focuses on increasing accuracy and/or the speed of testing a single probe-subject pair. In this work, we present a novel method inspired by the success of locality-sensitive hashing (LSH) applied to large general-purpose datasets and by the robustness provided by partial least squares (PLS) analysis when applied to large sets of feature vectors for face recognition. The result is a robust hashing method, compatible with feature combination, for the fast computation of a short list of candidates in a large gallery of subjects. We provide theoretical support and practical principles for the proposed method that may be reused in the further development of hash functions applied to face galleries. The proposed method is evaluated on the FERET and FRGCv1 datasets and compared to other methods in the literature. Experimental results show that the proposed approach achieves a 16-fold speedup compared to scanning all subjects in the face gallery.
{"title":"Learning to hash faces using large feature vectors","authors":"C. E. Santos, Ewa Kijak, G. Gravier, W. R. Schwartz","doi":"10.1109/CBMI.2015.7153611","DOIUrl":"https://doi.org/10.1109/CBMI.2015.7153611","url":null,"abstract":"Face recognition has been largely studied in past years. However, most of the related work focus on increasing accuracy and/or speed to test a single pair probe-subject. In this work, we present a novel method inspired by the success of locality sensing hashing (LSH) applied to large general purpose datasets and by the robustness provided by partial least squares (PLS) analysis when applied to large sets of feature vectors for face recognition. The result is a robust hashing method compatible with feature combination for fast computation of a short list of candidates in a large gallery of subjects. We provide theoretical support and practical principles for the proposed method that may be reused in further development of hash functions applied to face galleries. The proposed method is evaluated on the FERET and FRGCv1 datasets and compared to other methods in the literature. Experimental results show that the proposed approach is able to speedup 16 times compared to scanning all subjects in the face gallery.","PeriodicalId":387496,"journal":{"name":"2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121556826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-10 | DOI: 10.1109/CBMI.2015.7153623
K. Schutte, H. Bouma, J. Schavemaker, L. Daniele, Maya Sappelli, G. Koot, P. Eendebak, G. Azzopardi, Martijn Spitters, M. D. Boer, M. Kruithof, Paul Brandt
The number of networked cameras is growing exponentially, and multiple applications in different domains result in an increasing need to search semantically over video sensor data. In this paper, we present the GOOSE demonstrator, a real-time general-purpose search engine that allows users to pose natural-language queries to retrieve corresponding images. Top-down, the demonstrator interprets queries, which are presented as an intuitive graph to collect user feedback. Bottom-up, the system automatically recognizes and localizes concepts in images and can incrementally learn novel concepts. A smart ranking combines both directions and allows effective retrieval of relevant images.
{"title":"Interactive detection of incrementally learned concepts in images with ranking and semantic query interpretation","authors":"K. Schutte, H. Bouma, J. Schavemaker, L. Daniele, Maya Sappelli, G. Koot, P. Eendebak, G. Azzopardi, Martijn Spitters, M. D. Boer, M. Kruithof, Paul Brandt","doi":"10.1109/CBMI.2015.7153623","DOIUrl":"https://doi.org/10.1109/CBMI.2015.7153623","url":null,"abstract":"The number of networked cameras is growing exponentially. Multiple applications in different domains result in an increasing need to search semantically over video sensor data. In this paper, we present the GOOSE demonstrator, which is a real-time general-purpose search engine that allows users to pose natural language queries to retrieve corresponding images. Top-down, this demonstrator interprets queries, which are presented as an intuitive graph to collect user feedback. Bottom-up, the system automatically recognizes and localizes concepts in images and it can incrementally learn novel concepts. A smart ranking combines both and allows effective retrieval of relevant images.","PeriodicalId":387496,"journal":{"name":"2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127662600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-10 | DOI: 10.1109/CBMI.2015.7153626
Abdelkader Hamadi, P. Mulhem, G. Quénot
The automated indexing of images and videos is a difficult problem because of the “distance” between the arrays of numbers encoding these documents and the concepts (e.g. people, places, events or objects) with which we wish to annotate them. Methods exist for this, but their results are far from satisfactory in terms of generality and accuracy. Existing methods typically use a single set of training examples per concept and consider it as uniform. This is not optimal, because the same concept may appear in various contexts and its appearance may be very different depending upon these contexts. Context has been widely used in the state of the art to address various problems; for videos, however, the temporal context seems to be the most crucial and the most effective. In this paper, we present a comparative study of two methods exploiting the temporal context for semantic video indexing. The proposed approaches use temporal information derived from two different sources: low-level content and semantic information. Our experiments on the TRECVID'12 collection show interesting results that confirm the usefulness of the temporal context and indicate which of the two approaches is more effective.
{"title":"Temporal re-scoring vs. temporal descriptors for semantic indexing of videos","authors":"Abdelkader Hamadi, P. Mulhem, G. Quénot","doi":"10.1109/CBMI.2015.7153626","DOIUrl":"https://doi.org/10.1109/CBMI.2015.7153626","url":null,"abstract":"The automated indexing of image and video is a difficult problem because of the “distance” between the arrays of numbers encoding these documents and the concepts (e.g. people, places, events or objects) with which we wish to annotate them. Methods exist for this but their results are far from satisfactory in terms of generality and accuracy. Existing methods typically use a single set of such examples and consider it as uniform. This is not optimal because the same concept may appear in various contexts and its appearance may be very different depending upon these contexts. The context has been widely used in the state of the art to treat various problems. However, the temporal context seems to be the most crucial and the most effective for the case of videos. In this paper, we present a comparative study between two methods exploiting the temporal context for semantic video indexing. The proposed approaches use temporal information that is derived from two different sources: low-level content and semantic information. Our experiments on TRECVID'12 collection showed interesting results that confirm the usefulness of the temporal context and demonstrate which of the two approaches is more effective.","PeriodicalId":387496,"journal":{"name":"2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123303853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-10 | DOI: 10.1109/CBMI.2015.7153602
Lukas Diem, M. Zaharieva
The immense amount of available video data poses novel requirements for video representation approaches: they need to focus on the central and relevant aspects of the underlying story and to facilitate an efficient overview and assessment of the content. In general, the assessment of content relevance and significance is a high-level task that usually requires human intervention. However, some filming techniques imply importance and bear the potential for automated content-based analysis. For example, core elements in a movie (such as the main characters and central objects) are often emphasized by repeated occurrence. In this paper we present a new approach for the automated detection of such recurring elements in video sequences, which provides a compact and interpretable content representation. The performed experiments outline the challenges and the potential of the algorithm for automated high-level video analysis.
{"title":"Interpretable video representation","authors":"Lukas Diem, M. Zaharieva","doi":"10.1109/CBMI.2015.7153602","DOIUrl":"https://doi.org/10.1109/CBMI.2015.7153602","url":null,"abstract":"The immense amount of available video data poses novel requirements for video representation approaches by means of focusing on central and relevant aspects of the underlying story and facilitating the efficient overview and assessment of the content. In general, the assessment of content relevance and significance is a high-level task that usually requires for human intervention. However, some filming techniques imply importance and bear the potential for automated content-based analysis. For example, core elements in a movie (such as the main characters and central objects) are often emphasized by repeated occurrence. In this paper we present a new approach for the automated detection of such recurring elements in video sequences that provides a compact and interpretable content representation. Performed experiments outline the challenges and the potential of the algorithm for automated high-level video analysis.","PeriodicalId":387496,"journal":{"name":"2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"23 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116518793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-10 | DOI: 10.1109/CBMI.2015.7153610
Bahjat Safadi, G. Quénot
This paper presents a set of improvements for SVM-based large-scale multimedia indexing. The proposed method is particularly suited to the detection of many target concepts at once and to highly imbalanced classes (very infrequent concepts). The method is based on the use of multiple SVMs (MSVM) for dealing with the class imbalance, and on adaptations of this approach that allow for an efficient implementation using optimized linear algebra routines. The implementation also involves hashed structures allowing the factorization of computations across the multiple SVMs and the multiple target concepts, and is denoted Factorized-MSVM. Experiments were conducted on a large-scale dataset, namely the TRECVid 2012 semantic indexing task. Results show that Factorized-MSVM performs as well as the original MSVM but is significantly faster: speed-ups by factors of several hundred were obtained for the simultaneous classification of 346 concepts, compared to the original MSVM implementation based on the popular libSVM library.
{"title":"A factorized model for multiple SVM and multi-label classification for large scale multimedia indexing","authors":"Bahjat Safadi, G. Quénot","doi":"10.1109/CBMI.2015.7153610","DOIUrl":"https://doi.org/10.1109/CBMI.2015.7153610","url":null,"abstract":"This paper presents a set of improvements for SVM-based large scale multimedia indexing. The proposed method is particularly suited for the detection of many target concepts at once and for highly imbalanced classes (very infrequent concepts). The method is based on the use of multiple SVMs (MSVM) for dealing with the class imbalance and on some adaptations of this approach in order to allow for an efficient implementation using optimized linear algebra routines. The implementation also involves hashed structures allowing the factorization of computations between the multiple SVMs and the multiple target concepts, and is denoted as Factorized-MSVM. Experiments were conducted on a large-scale dataset, namely TRECVid 2012 semantic indexing task. Results show that the Factorized-MSVM performs as well as the original MSVM, but it is significantly much faster. Speed-ups by factors of several hundreds were obtained for the simultaneous classification of 346 concepts, when compared to the original MSVM implementation using the popular libSVM implementation.","PeriodicalId":387496,"journal":{"name":"2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114664469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-10 | DOI: 10.1109/CBMI.2015.7153634
Navid Rekabsaz, R. Bierig, B. Ionescu, A. Hanbury, M. Lupu
We revisit text-based image retrieval for social media, exploring the opportunities offered by statistical semantics. We assess the performance and limitations of several complementary corpus-based semantic text similarity methods in combination with word representations, and compare the results with state-of-the-art text search engines. Our deep-learning-based semantic retrieval methods show a statistically significant improvement over a best-practice Solr search engine, at the expense of a significant increase in processing time. We provide a solution that reduces the semantic processing time by up to 48% compared to the standard approach, while achieving the same performance.
{"title":"On the use of statistical semantics for metadata-based social image retrieval","authors":"Navid Rekabsaz, R. Bierig, B. Ionescu, A. Hanbury, M. Lupu","doi":"10.1109/CBMI.2015.7153634","DOIUrl":"https://doi.org/10.1109/CBMI.2015.7153634","url":null,"abstract":"We revisit text-based image retrieval for social media, exploring the opportunities offered by statistical semantics. We assess the performance and limitation of several complementary corpus-based semantic text similarity methods in combination with word representations. We compare results with state-of-the-art text search engines. Our deep learning-based semantic retrieval methods show a statistically significant improvement in comparison to a best practice Solr search engine, at the expense of a significant increase in processing time. We provide a solution for reducing the semantic processing time up to 48% compared to the standard approach, while achieving the same performance.","PeriodicalId":387496,"journal":{"name":"2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114820707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2015-06-10 | DOI: 10.1109/CBMI.2015.7153605
Hassan Wehbe, P. Joly, B. Haidar
In this paper we propose a method to locate in-loop repetitions in a video, where an in-loop repetition consists of the same action(s) being repeated many times consecutively. The proposed method adapts the autocorrelation-based YIN method, originally proposed to find the fundamental frequency of audio signals. Based on this technique, we generate a matrix, which we call the YIN-Matrix, in which repetitions correspond to triangle-shaped zones of low values. Locating these triangles allows us to locate the video segments that enclose a repetition and to extract their parameters. To evaluate our method, we use a standard evaluation protocol that reports error rates against ground-truth information. According to this evaluation, our method shows promising results that make it a solid basis for future work.
{"title":"Automatic detection of repetitive actions in a video","authors":"Hassan Wehbe, P. Joly, B. Haidar","doi":"10.1109/CBMI.2015.7153605","DOIUrl":"https://doi.org/10.1109/CBMI.2015.7153605","url":null,"abstract":"In this paper we propose a method to locate inloop repetitions in a video. An in-loop repetition consists in repeating the same action(s) many times consecutively. The proposed method adapts and uses the auto-correlation method YIN, originally proposed to find the fundamental frequency in audio signals. Based on this technique, we propose a method that generates a matrix where repetitions correspond to triangle-shaped zones of low values in this matrix (we called YIN-Matrix). Locating these triangles leads to locate video segments that enclose a repetition as well as to extract their parameters. In order to evaluate our method, we used a standard evaluation method that shows the error rates compared to ground-truth information. According to this evaluation method, our method shows promising results that nominate it to form a solid base for future works.","PeriodicalId":387496,"journal":{"name":"2015 13th International Workshop on Content-Based Multimedia Indexing (CBMI)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115207395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}