We propose a logo detection approach that uses Haar (Haar-like) features computed directly from the gradient orientation channel, the gradient magnitude channel, and the gray intensity channel to extract discriminating features effectively and efficiently for a variety of logo images. The major contributions of this work are two-fold: 1) we explicitly demonstrate that, with an optimized design and implementation, considerable discriminative power can be obtained from simple features, such as Haar features, extracted directly from the low-level gradient orientation and magnitude channels; 2) we propose an effective and efficient logo detection approach using the Haar features obtained directly from the gradient orientation, magnitude, and gray image channels. Experimental results on collected merchandise images of Louis Vuitton (LV) and Polo Ralph Lauren (PRL) products show the promising applicability of our approach.
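To make the feature construction concrete, here is a minimal sketch, not the authors' implementation, of computing a two-rectangle Haar-like response over the gray, gradient-magnitude, and quantized gradient-orientation channels via integral images; the channel definitions, the four orientation bins, and the window parameters are our own assumptions.

```python
import numpy as np

def gradient_channels(gray):
    """Build gray, gradient-magnitude, and quantized-orientation channels from a gray image."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)  # in [-pi, pi]
    # Quantize orientation into 4 bins; each bin becomes its own channel,
    # weighted by the gradient magnitude.
    bins = ((ori + np.pi) / (2 * np.pi) * 4).astype(int) % 4
    ori_channels = [np.where(bins == b, mag, 0.0) for b in range(4)]
    return [gray.astype(np.float64), mag] + ori_channels

def integral_image(channel):
    """Summed-area table with a zero first row/column for easy box sums."""
    ii = np.cumsum(np.cumsum(channel, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def box_sum(ii, r, c, h, w):
    """Sum of channel values inside the h-by-w box whose top-left corner is (r, c)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Simple two-rectangle Haar-like feature: left half minus right half of the window."""
    half = w // 2
    return box_sum(ii, r, c, h, half) - box_sum(ii, r, c + half, h, half)

# Usage sketch: one Haar response per channel at a fixed window position/size
# (load_gray_image is a hypothetical loader).
# gray = load_gray_image("logo.png")
# feats = [haar_two_rect(integral_image(ch), 10, 10, 24, 24)
#          for ch in gradient_channels(gray)]
```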
{"title":"Using Low Level Gradient Channels for Computationally Efficient Object Detection and Its Application in Logo Detection","authors":"Yu Chen, V. Thing","doi":"10.1109/ISM.2012.51","DOIUrl":"https://doi.org/10.1109/ISM.2012.51","url":null,"abstract":"We propose a logo detection approach which utilizes the Haar (Haar-like) features computed directly from the gradient orientation, gradient magnitude channels and the gray intensity channel to effectively and efficiently extract discriminating features for a variety of logo images. The major contributions of this work are two-fold: 1) we explicitly demonstrate that, with an optimized design and implementation, the considerable discrimination can be obtained from the simple features like the Haar features which are extracted directly from the low level gradient orientation and magnitude channels, 2) we proposed an effective and efficient logo detection approach by using the Haar features obtained directly from gradient orientation, magnitude, and gray image channels. The experimental results on the collected merchandise images of Louis Vuitton (LV) and Polo Ralph Lauren (PRL) products show promising applicabilities of our approach.","PeriodicalId":282528,"journal":{"name":"2012 IEEE International Symposium on Multimedia","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129760482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Due to the overwhelming use of 3D models in video games and virtual environments, there is a growing interest in 3D scene generation, scene understanding, and 3D model retrieval. In this paper, we introduce a data-driven 3D scene generation approach from a Maximum Entropy (MaxEnt) model selection perspective. Using this model selection criterion, new scenes can be sampled by matching a set of contextual constraints extracted from training and synthesized scenes. Starting from a set of random synthesized configurations of objects in 3D, the MaxEnt distribution is iteratively sampled (using Metropolis sampling) and updated until the constraints of the training and synthesized scenes match, indicating the generation of plausible synthesized 3D scenes. To illustrate the proposed methodology, we use 3D training desk scenes that are all composed of seven predefined objects with different position, scale, and orientation arrangements. After applying the MaxEnt framework, the synthesized scenes show that the proposed strategy can generate scenes reasonably similar to the training examples without any human supervision during sampling. We would like to mention, however, that such an approach is not limited to the desk scene generation described here and can be extended to any 3D scene generation problem.
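As a rough illustration of the sampling loop described above, the following sketch performs a single Metropolis update on an object-configuration vector under a hypothetical energy defined as the mismatch between synthesized and training constraint statistics; the names, the Gaussian proposal, and the squared-error energy are assumptions, not the authors' code.

```python
import numpy as np

def metropolis_step(config, energy_fn, proposal_std=0.1, rng=None):
    """One Metropolis update on a vector of object parameters (positions, scales, angles)."""
    rng = rng or np.random.default_rng()
    proposal = config + rng.normal(0.0, proposal_std, size=config.shape)
    dE = energy_fn(proposal) - energy_fn(config)
    # Accept downhill moves always, uphill moves with probability exp(-dE).
    if dE <= 0 or rng.random() < np.exp(-dE):
        return proposal
    return config

def make_energy(training_stats, stats_fn):
    """Hypothetical energy: squared mismatch between synthesized and training
    constraint statistics (e.g., pairwise object distances)."""
    return lambda cfg: float(np.sum((stats_fn(cfg) - training_stats) ** 2))
```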
{"title":"3D Scene Generation by Learning from Examples","authors":"Mesfin Dema, H. Sari-Sarraf","doi":"10.1109/ISM.2012.19","DOIUrl":"https://doi.org/10.1109/ISM.2012.19","url":null,"abstract":"Due to overwhelming use of 3D models in video games and virtual environments, there is a growing interest in 3D scene generation, scene understanding and 3D model retrieval. In this paper, we introduce a data-driven 3D scene generation approach from a Maximum Entropy (MaxEnt) model selection perspective. Using this model selection criterion, new scenes can be sampled by matching a set of contextual constraints that are extracted from training and synthesized scenes. Starting from a set of random synthesized configurations of objects in 3D, the MaxEnt distribution is iteratively sampled (using Metropolis sampling) and updated until the constraints between training and synthesized scenes match, indicating the generation of plausible synthesized 3D scenes. To illustrate the proposed methodology, we use 3D training desk scenes that are all composed of seven predefined objects with different position, scale and orientation arrangements. After applying the MaxEnt framework, the synthesized scenes show that the proposed strategy can generate reasonably similar scenes to the training examples without any human supervision during sampling. We would like to mention, however, that such an approach is not limited to desk scene generation as described here and can be extended to any 3D scene generation problem.","PeriodicalId":282528,"journal":{"name":"2012 IEEE International Symposium on Multimedia","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122071912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a comprehensive statistical study of the makeup effect on facial parts (skin, eyes, and lips) is conducted first. Based on this statistical study, a method is proposed to detect whether makeup has been applied to an input facial image; the makeup effect is then further quantified as a Young Index (YI) for female age estimation. An age estimator that takes the makeup effect into account is presented. Experimental results show that, with the makeup effect considered, the proposed method improves accuracy by 0.9-6.7% in CS (Cumulative Score) and reduces MAE (Mean Absolute Error between the estimated age and the ground-truth age labeled in or acquired from the data) by 0.26-9.76, compared with other age estimation methods.
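For reference, the two evaluation metrics quoted above can be computed as in this small sketch; the 5-year CS tolerance is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def mae(est_ages, true_ages):
    """Mean Absolute Error between estimated and ground-truth ages."""
    return float(np.mean(np.abs(np.asarray(est_ages) - np.asarray(true_ages))))

def cumulative_score(est_ages, true_ages, tolerance=5):
    """Fraction of estimates whose absolute error is within `tolerance` years."""
    err = np.abs(np.asarray(est_ages) - np.asarray(true_ages))
    return float(np.mean(err <= tolerance))

# Example: mae([24, 31], [25, 28]) -> 2.0; cumulative_score([24, 31], [25, 28], 5) -> 1.0
```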
{"title":"Quantifying the Makeup Effect in Female Faces and Its Applications for Age Estimation","authors":"Ranran Feng, B. Prabhakaran","doi":"10.1109/ISM.2012.29","DOIUrl":"https://doi.org/10.1109/ISM.2012.29","url":null,"abstract":"In this paper, a comprehensive statistical study of makeup effect on facial parts (skin, eyes, and lip) is conducted first. According to the statistical study, a method to detect whether makeup is applied or not based on input facial image is proposed, then the makeup effect is further quantified as Young Index (YI) for female age estimation. An age estimator with makeup effect considered is presented in this paper. Results from the experiments find that with the makeup effect considered, the method proposed in this paper can improve accuracy by 0.9-6.7% in CS (Cumulative Score) and 0.26-9.76 in MAE (Mean of Absolute Errors between the estimated age and the ground truth age labeled or acquired from the data) comparing with other age estimation methods.","PeriodicalId":282528,"journal":{"name":"2012 IEEE International Symposium on Multimedia","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124591783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work, we propose a framework for a robust Content Based Video Retrieval (CBVR) system with free-hand query sketches, using the Multi-Spectro Temporal-Curvature Scale Space (MST-CSS) representation. Our interface allows sketches to be drawn to depict the shape of the object in motion and its trajectory. We obtain the MST-CSS feature representation from these cues and match it against a set of MST-CSS features generated offline from the video clips in the database (gallery). Results are displayed in rank order of similarity. Experiments with benchmark datasets show promising results.
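The final ranking step can be illustrated with a minimal sketch that sorts gallery items by feature distance to the query; the plain Euclidean distance here is a placeholder for the MST-CSS matching actually used in the paper, and the feature vectors are hypothetical.

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats):
    """Return gallery indices sorted by ascending Euclidean distance to the query feature."""
    q = np.asarray(query_feat, dtype=float)
    dists = [np.linalg.norm(q - np.asarray(g, dtype=float)) for g in gallery_feats]
    return np.argsort(dists)

# Usage (toy vectors standing in for MST-CSS descriptors):
# order = rank_gallery([0.1, 0.4], [[0.2, 0.3], [0.9, 0.8]])  # -> array([0, 1])
```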
{"title":"A Motion-Sketch Based Video Retrieval Using MST-CSS Representation","authors":"C. Chattopadhyay, Sukhendu Das","doi":"10.1109/ISM.2012.76","DOIUrl":"https://doi.org/10.1109/ISM.2012.76","url":null,"abstract":"In this work, we propose a framework for a robust Content Based Video Retrieval (CBVR) system with free hand query sketches, using the Multi-Spectro Temporal-Curvature Scale Space (MST-CSS) representation. Our designed interface allows sketches to be drawn to depict the shape of the object in motion and its trajectory. We obtain the MST-CSS feature representation using these cues and match with a set of MST-CSS features generated offline from the video clips in the database (gallery). Results are displayed in rank ordered similarity. Experimentation with benchmark datasets shows promising results.","PeriodicalId":282528,"journal":{"name":"2012 IEEE International Symposium on Multimedia","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126537211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stereo correspondence is an ill-posed problem, mainly due to matching ambiguity, which is especially serious in extreme cases where the corresponding relationship is unknown and can be very complicated. Mutual information (MI), which assumes no prior relationship between the matching pair, is a good solution to this problem. This paper proposes a context-aware mutual information and Markov Random Field (MRF) based approach in which gradient information is introduced into both the data term and the smoothness term of the MAP-MRF framework, where techniques such as graph cuts can be used to find an accurate disparity map. The results show that the proposed context-aware method outperforms non-MI and traditional MI-based methods both quantitatively and qualitatively in some extreme cases.
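As background, the MI term itself can be estimated from a joint intensity histogram of putatively corresponding pixels. The sketch below shows only this standard estimate; it is not the paper's context-aware formulation, and the bin count is an assumption.

```python
import numpy as np

def mutual_information(left_vals, right_vals, bins=32):
    """Mutual information between corresponding left/right intensities via a joint histogram."""
    joint, _, _ = np.histogram2d(left_vals, right_vals, bins=bins)
    pxy = joint / joint.sum()                    # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # left marginal, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)          # right marginal, shape (1, bins)
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```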
{"title":"Mutual Information Based Stereo Correspondence in Extreme Cases","authors":"Qing Tian, GuangJun Tian","doi":"10.1109/ISM.2012.46","DOIUrl":"https://doi.org/10.1109/ISM.2012.46","url":null,"abstract":"Stereo correspondence is an ill-posed problem mainly due to matching ambiguity, which is especially serious in extreme cases where the corresponding relationship is unknown and can be very complicated. Mutual information (MI), which assumes no prior relationship on the matching pair, is a good solution to this problem. This paper proposes a context-aware mutual information and Markov Random Field (MRF) based approach with gradient information introduced into both the data term and the smoothness term of the MAP-MRF framework where such advanced techniques as graph cuts can be used to find an accurate disparity map. The results show that the proposed context-aware method outperforms non-MI and traditional MI-based methods both quantitatively and qualitatively in some extreme cases.","PeriodicalId":282528,"journal":{"name":"2012 IEEE International Symposium on Multimedia","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126438175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tag-clouds are becoming extremely popular in the multimedia community as a medium of exploration and expression. In this work, we take tag-cloud construction to a new level by allowing a tag-cloud to take any arbitrary shape while preserving some ordering of the tags (here, alphabetical). Our method guarantees non-overlap among words and ensures a compact representation within the specified shape. Experiments on a variety of input tag sets and tag-cloud shapes show that the proposed method is promising and achieves real-time performance. Finally, we show the applicability of our method with an application in which tag-clouds specific to places, people, and keywords are constructed and used for digital media selection within a social network domain.
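The non-overlap guarantee mentioned above reduces, at its core, to a collision test between word bounding boxes during placement. The following sketch shows such a test with a naive greedy placement loop; it illustrates the idea only and is not the paper's layout algorithm.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

def overlaps(a: Box, b: Box) -> bool:
    """Axis-aligned bounding-box overlap test used to reject colliding word placements."""
    return not (a.x + a.w <= b.x or b.x + b.w <= a.x or
                a.y + a.h <= b.y or b.y + b.h <= a.y)

def place_words(sizes, candidate_positions):
    """Greedily place each word box at the first candidate position that collides with nothing.
    Words that fit nowhere are simply skipped in this toy version."""
    placed = []
    for w, h in sizes:
        for x, y in candidate_positions:
            candidate = Box(x, y, w, h)
            if all(not overlaps(candidate, p) for p in placed):
                placed.append(candidate)
                break
    return placed
```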
{"title":"Tag Cloud++ - Scalable Tag Clouds for Arbitrary Layouts","authors":"Minwoo Park, D. Joshi, A. Loui","doi":"10.1109/ISM.2012.66","DOIUrl":"https://doi.org/10.1109/ISM.2012.66","url":null,"abstract":"Tag-clouds are becoming extremely popular in multimedia community as media of exploration and expression. In this work, we take tag-cloud construction to a new level by allowing a tag-cloud to take any arbitrary shape while preserving some order of tags (here alphabetical). Our method guarantees non-overlap among words and ensures compact representation within specified shape. The experiments on a variety of input set of tags and shapes of the tag-clouds show that the proposed method is promising and has real-time performance. Finally, we show the applicability of our method with an application wherein the tag-clouds specific to places, people, and keywords are constructed and used for digital media selection within a social network domain.","PeriodicalId":282528,"journal":{"name":"2012 IEEE International Symposium on Multimedia","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131977589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background subtraction is widely employed for the detection of moving objects when the background does not show much dynamic behavior. Many background models have been proposed. Most of them analyze only the temporal behavior of pixels and ignore the spatial relations of the neighborhood, which may be key to better separation of foreground from background when the background exhibits dynamic activity. To remedy this, some researchers have proposed spatio-temporal approaches, usually in a block-based framework. Two recent reviews [1, 2] showed that the temporal kernel density estimation (KDE) method and the temporal Gaussian mixture model (GMM) perform about equally well and best among the available temporal background models. A spatio-temporal version of KDE has been proposed; for GMM, however, an explicit extension to the spatio-temporal domain is not easily found in the literature. In this paper, we propose an extension of the GMM from the temporal domain to the spatio-temporal domain. We applied the methods to well-known test sequences and found that the proposed model outperforms the temporal GMM.
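For context, a minimal per-pixel temporal GMM update in the Stauffer-Grimson style is sketched below; the paper's spatio-temporal extension would additionally draw update samples from a spatial neighborhood of each pixel. The parameter values, the fixed learning rate, and the simplified foreground decision are assumptions.

```python
import numpy as np

class PixelGMM:
    """Per-pixel Gaussian mixture background model (temporal-only baseline)."""

    def __init__(self, k=3, alpha=0.01, var0=15.0**2, match_thresh=2.5):
        self.w = np.full(k, 1.0 / k)        # mixture weights
        self.mu = np.linspace(0, 255, k)    # component means (gray values)
        self.var = np.full(k, var0)         # component variances
        self.alpha = alpha                  # learning rate
        self.match_thresh = match_thresh    # match threshold in standard deviations
        self.var0 = var0

    def update(self, x):
        """Update the mixture with intensity x; return True if x looks like foreground."""
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        matched = int(np.argmin(d)) if d.min() < self.match_thresh else None
        self.w *= (1.0 - self.alpha)
        if matched is None:
            # Replace the weakest component with a new one centered at x.
            weakest = int(np.argmin(self.w))
            self.mu[weakest], self.var[weakest], self.w[weakest] = x, self.var0, self.alpha
        else:
            self.w[matched] += self.alpha
            rho = self.alpha  # simplified update factor
            self.mu[matched] += rho * (x - self.mu[matched])
            self.var[matched] += rho * ((x - self.mu[matched]) ** 2 - self.var[matched])
        self.w /= self.w.sum()
        # Simplification: treat any unmatched observation as foreground.
        return matched is None
```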
{"title":"Spatio-temporal Gaussian Mixture Model for Background Modeling","authors":"Y. Soh, Y. Hae, Intaek Kim","doi":"10.1109/ISM.2012.73","DOIUrl":"https://doi.org/10.1109/ISM.2012.73","url":null,"abstract":"Background subtraction is widely employed in the detection of moving objects when background does not show much dynamic behavior. Many background models have been proposed by researchers. Most of them analyses only temporal behavior of pixels and ignores spatial relations of neighborhood that may be a key to better separation of foreground from background when background has dynamic activities. To remedy, some researchers proposed spatio-temporal approaches usually in the block-based framework. Two recent reviews[1, 2] showed that temporal kernel density estimation(KDE) method and temporal Gaussian mixture model(GMM) perform about equally best among possible temporal background models. Spatio-temporal version of KDE was proposed. However, for GMM, explicit extension to spatio-temporal domain is not easily seen in the literature. In this paper, we propose an extension of GMM from temporal domain to spatio-temporal domain. We applied the methods to well known test sequences and found that the proposed outperforms the temporal GMM.","PeriodicalId":282528,"journal":{"name":"2012 IEEE International Symposium on Multimedia","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132073467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The production of lecture recordings is becoming increasingly important for university education and is highly appreciated by students. However, lecture recordings and the corresponding systems are only a subset of the different kinds of learning materials and learning tools that exist in learning environments. This calls for learning system designs that are easily accessible, extensible, and open to integration with other environments, data sources, and user (inter-)actions. The contribution of this paper is as follows: we suggest a system that supports educators in presenting, recording, and providing their lectures, together with a system design following Linked Data principles that facilitates integration and enables users to interact both with each other and with the learning materials.
{"title":"DLH/CLLS: An Open, Extensible System Design for Prosuming Lecture Recordings and Integrating Multimedia Learning Ecosystems","authors":"Kai Michael Höver, Gundolf von Bachhaus, M. Hartle, M. Mühlhäuser","doi":"10.1109/ISM.2012.97","DOIUrl":"https://doi.org/10.1109/ISM.2012.97","url":null,"abstract":"The production of lecture recordings is becoming increasingly important for university education and highly appreciated by students. However, those lecture recordings and corresponding systems are only a subset of different kinds of learning materials and learning tools that exist in learning environments. This demands for learning system designs that are easily accessible, extensible, and open for the integration with other environments, data sources, and user (inter-)actions. The contributions of this paper is as follows: we suggest a system that supports educators in presenting, recording, and providing their lectures as well as a system design following Linked Data principles to facilitate integration and users to interact with both each other and learning materials.","PeriodicalId":282528,"journal":{"name":"2012 IEEE International Symposium on Multimedia","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127522687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a general model-based framework for multiview depth image enhancement is proposed. Depth imagery plays a pivotal role in emerging free-viewpoint television. This technology requires high-quality virtual view synthesis to enable viewers to move freely in a dynamic real-world scene. Depth imagery from different viewpoints is used to synthesize an arbitrary number of novel views. Usually, the depth imagery is estimated individually by stereo-matching algorithms and hence lacks inter-view consistency. This inconsistency negatively affects the quality of view synthesis. This paper enhances the inter-view consistency of multiview depth imagery using a variational Bayesian inference framework. First, our approach classifies the color information in the multiview color imagery. Second, using the resulting color clusters, we classify the corresponding depth values in the multiview depth imagery. Each clustered depth image is subject to further sub-clustering. Finally, the resulting means of the sub-clusters are used to enhance the depth imagery at multiple viewpoints. Experiments show that our approach improves the quality of virtual views by up to 0.25 dB.
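The cluster-then-sub-cluster structure described above can be illustrated with a simple k-means stand-in that replaces each depth value with its sub-cluster mean; this is only a rough sketch of the pipeline shape, not the variational Bayesian inference used in the paper, and all parameters are hypothetical.

```python
import numpy as np

def enhance_depth_by_clustering(colors, depths, n_color_clusters=8,
                                n_depth_subclusters=3, iters=10, seed=0):
    """Cluster pixels by color, sub-cluster depths within each color cluster,
    and replace each depth by its sub-cluster mean (a rough stand-in for the
    paper's variational Bayesian treatment)."""
    rng = np.random.default_rng(seed)

    def kmeans(x, k):
        x = x.reshape(len(x), -1).astype(float)
        centers = x[rng.choice(len(x), size=min(k, len(x)), replace=False)]
        for _ in range(iters):
            labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(len(centers)):
                if np.any(labels == j):
                    centers[j] = x[labels == j].mean(axis=0)
        return labels, centers

    out = depths.astype(float).copy()
    color_labels, _ = kmeans(colors, n_color_clusters)
    for c in np.unique(color_labels):
        idx = np.where(color_labels == c)[0]
        sub_labels, sub_centers = kmeans(depths[idx], n_depth_subclusters)
        out[idx] = sub_centers[sub_labels].ravel()   # assign sub-cluster mean depths
    return out
```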
{"title":"A Variational Bayesian Inference Framework for Multiview Depth Image Enhancement","authors":"P. Rana, Jalil Taghia, M. Flierl","doi":"10.1109/ISM.2012.44","DOIUrl":"https://doi.org/10.1109/ISM.2012.44","url":null,"abstract":"In this paper, a general model-based framework for multiview depth image enhancement is proposed. Depth imagery plays a pivotal role in emerging free-viewpoint television. This technology requires high quality virtual view synthesis to enable viewers to move freely in a dynamic real world scene. Depth imagery of different viewpoints is used to synthesize an arbitrary number of novel views. Usually, the depth imagery is estimated individually by stereo-matching algorithms and, hence, shows lack of inter-view consistency. This inconsistency affects the quality of view synthesis negatively. This paper enhances the inter-view consistency of multiview depth imagery by using a variational Bayesian inference framework. First, our approach classifies the color information in the multiview color imagery. Second, using the resulting color clusters, we classify the corresponding depth values in the multiview depth imagery. Each clustered depth image is subject to further sub clustering. Finally, the resulting mean of the sub-clusters is used to enhance the depth imagery at multiple viewpoints. Experiments show that our approach improves the quality of virtual views by up to 0.25 dB.","PeriodicalId":282528,"journal":{"name":"2012 IEEE International Symposium on Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133131722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lane detection algorithms constitute a basis for intelligent vehicle systems such as lane tracking and involuntary lane departure detection. In this paper, we propose a simple, video-based lane detection algorithm that uses a fast vanishing point estimation method. The first step of the algorithm is to extract and validate line segments from the image with a recently proposed line detection algorithm. In the next step, an angle-based elimination of line segments is performed according to the perspective characteristics of lane markings. This basic operation removes many line segments that belong to irrelevant details in the scene and greatly reduces the number of features to be processed afterwards. The remaining line segments are extrapolated and superimposed to detect the image location where the majority of the linear edge features converge. The location found by this efficient operation is assumed to be the vanishing point. Subsequently, an orientation-based removal is performed, eliminating the line segments whose extensions do not intersect the vanishing point. The final step is to cluster the remaining line segments such that each cluster represents a lane marking or a boundary of the road (i.e., sidewalks, barriers, or shoulders). The properties of the line segments that constitute each cluster are fused to represent the cluster with a single line. The two clusters nearest to the vehicle are chosen as the lines that bound the lane being driven on. The proposed algorithm runs in an average of 12 milliseconds per frame at 640×480 resolution on a 2.20 GHz Intel CPU. This performance shows that the algorithm can be deployed on minimal hardware while still providing real-time performance.
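A simple alternative way to locate the point where extended line segments converge is a least-squares estimate, sketched below; the paper instead superimposes the extrapolated segments and takes the location of maximum convergence, so this is only an illustrative approximation.

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares point closest to the infinite extensions of all line segments.
    Each segment is ((x1, y1), (x2, y2))."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for (x1, y1), (x2, y2) in segments:
        d = np.array([x2 - x1, y2 - y1], dtype=float)
        d /= np.linalg.norm(d)
        n = np.array([-d[1], d[0]])                 # unit normal of the line
        A += np.outer(n, n)
        b += np.outer(n, n) @ np.array([x1, y1], dtype=float)
    return np.linalg.solve(A, b)

# Two segments whose extensions meet at the origin:
# vanishing_point([((1, 1), (2, 2)), ((1, -1), (2, -2))])  # -> approx [0., 0.]
```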
{"title":"Video-Based Lane Detection Using a Fast Vanishing Point Estimation Method","authors":"Burak Benligiray, C. Topal, C. Akinlar","doi":"10.1109/ISM.2012.70","DOIUrl":"https://doi.org/10.1109/ISM.2012.70","url":null,"abstract":"Lane detection algorithms constitute a basis for intelligent vehicle systems such as lane tracking and involuntary lane departure detection. In this paper, we propose a simple and video-based lane detection algorithm that uses a fast vanishing point estimation method. The first step of the algorithm is to extract and validate the line segments from the image with a recently proposed line detection algorithm. In the next step, an angle based elimination of line segments is done according to the perspective characteristics of lane markings. This basic operation removes many line segments that belong to irrelevant details on the scene and greatly reduces the number of features to be processed afterwards. Remaining line segments are extrapolated and superimposed to detect the image location where majority of the linear edge features converge. The location found by this efficient operation is assumed to be the vanishing point. Subsequently, an orientation-based removal is done by eliminating the line segments whose extensions do not intersect the vanishing point. The final step is clustering the remaining line segments such that each cluster represents a lane marking or a boundary of the road (i.e. sidewalks, barriers or shoulders). The properties of the line segments that constitute the clusters are fused to represent each cluster with a single line. The nearest two clusters to the vehicle are chosen as the lines that bound the lane that is being driven on. The proposed algorithm works in an average of 12 milliseconds for each frame with 640×480 resolution on a 2.20 GHz Intel CPU. This performance metric shows that the algorithm can be deployed on minimal hardware and still provide real-time performance.","PeriodicalId":282528,"journal":{"name":"2012 IEEE International Symposium on Multimedia","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124620418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}