ORB: An efficient alternative to SIFT or SURF
Pub Date: 2011-11-06 | DOI: 10.1109/ICCV.2011.6126544
Ethan Rublee, V. Rabaud, K. Konolige, G. Bradski
Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments that ORB is two orders of magnitude faster than SIFT while performing as well in many situations. Its efficiency is tested on several real-world applications, including object detection and patch tracking on a smartphone.
{"title":"ORB: An efficient alternative to SIFT or SURF","authors":"Ethan Rublee, V. Rabaud, K. Konolige, G. Bradski","doi":"10.1109/ICCV.2011.6126544","DOIUrl":"https://doi.org/10.1109/ICCV.2011.6126544","url":null,"abstract":"Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is at two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone.","PeriodicalId":6391,"journal":{"name":"2011 International Conference on Computer Vision","volume":"25 1","pages":"2564-2571"},"PeriodicalIF":0.0,"publicationDate":"2011-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87290872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding scenes on many levels
Pub Date: 2011-11-06 | DOI: 10.1109/ICCV.2011.6126260
Joseph Tighe, S. Lazebnik
This paper presents a framework for image parsing with multiple label sets. For example, we may want to simultaneously label every image region according to its basic-level object category (car, building, road, tree, etc.), superordinate category (animal, vehicle, manmade object, natural object, etc.), geometric orientation (horizontal, vertical, etc.), and material (metal, glass, wood, etc.). Some object regions may also be given part names (a car can have wheels, doors, windshield, etc.). We compute co-occurrence statistics between different label types of the same region to capture relationships such as “roads are horizontal,” “cars are made of metal,” “cars have wheels” but “horses have legs,” and so on. By incorporating these constraints into a Markov Random Field inference framework and jointly solving for all the label sets, we are able to improve the classification accuracy for all the label sets at once, achieving a richer form of image understanding.
{"title":"Understanding scenes on many levels","authors":"Joseph Tighe, S. Lazebnik","doi":"10.1109/ICCV.2011.6126260","DOIUrl":"https://doi.org/10.1109/ICCV.2011.6126260","url":null,"abstract":"This paper presents a framework for image parsing with multiple label sets. For example, we may want to simultaneously label every image region according to its basic-level object category (car, building, road, tree, etc.), superordinate category (animal, vehicle, manmade object, natural object, etc.), geometric orientation (horizontal, vertical, etc.), and material (metal, glass, wood, etc.). Some object regions may also be given part names (a car can have wheels, doors, windshield, etc.). We compute co-occurrence statistics between different label types of the same region to capture relationships such as “roads are horizontal,” “cars are made of metal,” “cars have wheels” but “horses have legs,” and so on. By incorporating these constraints into a Markov Random Field inference framework and jointly solving for all the label sets, we are able to improve the classification accuracy for all the label sets at once, achieving a richer form of image understanding.","PeriodicalId":6391,"journal":{"name":"2011 International Conference on Computer Vision","volume":"54 1","pages":"335-342"},"PeriodicalIF":0.0,"publicationDate":"2011-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73515628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BiCoS: A Bi-level co-segmentation method for image classification
Pub Date: 2011-11-06 | DOI: 10.1109/ICCV.2011.6126546
Yuning Chai, V. Lempitsky, Andrew Zisserman
The objective of this paper is the unsupervised segmentation of image training sets into foreground and background in order to improve image classification performance. To this end we introduce a new scalable, alternation-based algorithm for co-segmentation, BiCoS, which is simpler than many of its predecessors, and yet has superior performance on standard benchmark image datasets.
{"title":"BiCoS: A Bi-level co-segmentation method for image classification","authors":"Yuning Chai, V. Lempitsky, Andrew Zisserman","doi":"10.1109/ICCV.2011.6126546","DOIUrl":"https://doi.org/10.1109/ICCV.2011.6126546","url":null,"abstract":"The objective of this paper is the unsupervised segmentation of image training sets into foreground and background in order to improve image classification performance. To this end we introduce a new scalable, alternation-based algorithm for co-segmentation, BiCoS, which is simpler than many of its predecessors, and yet has superior performance on standard benchmark image datasets.","PeriodicalId":6391,"journal":{"name":"2011 International Conference on Computer Vision","volume":"14 1","pages":"2579-2586"},"PeriodicalIF":0.0,"publicationDate":"2011-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88670474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discovering object instances from scenes of Daily Living
Pub Date: 2011-11-06 | DOI: 10.1109/ICCV.2011.6126314
Hongwen Kang, M. Hebert, T. Kanade
We propose an approach to identify and segment objects from scenes that a person (or robot) encounters in Activities of Daily Living (ADL). Images collected in those cluttered scenes contain multiple objects. Each image provides only a partial, possibly very different view of each object. An object instance discovery program must be able to link pieces of visual information from multiple images and extract consistent patterns.
{"title":"Discovering object instances from scenes of Daily Living","authors":"Hongwen Kang, M. Hebert, T. Kanade","doi":"10.1109/ICCV.2011.6126314","DOIUrl":"https://doi.org/10.1109/ICCV.2011.6126314","url":null,"abstract":"We propose an approach to identify and segment objects from scenes that a person (or robot) encounters in Activities of Daily Living (ADL). Images collected in those cluttered scenes contain multiple objects. Each image provides only a partial, possibly very different view of each object. An object instance discovery program must be able to link pieces of visual information from multiple images and extract the consistent patterns.","PeriodicalId":6391,"journal":{"name":"2011 International Conference on Computer Vision","volume":"1 1","pages":"762-769"},"PeriodicalIF":0.0,"publicationDate":"2011-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88858698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph mode-based contextual kernels for robust SVM tracking
Pub Date: 2011-11-06 | DOI: 10.1109/ICCV.2011.6126364
Xi Li, A. Dick, Hanzi Wang, Chunhua Shen, A. Hengel
Visual tracking has typically been solved as a binary classification problem. Most existing trackers consider only the pairwise interactions between samples and thereby ignore higher-order contextual interactions, which can make them sensitive to complicating factors such as noise, outliers, and background clutter. In this paper, we propose a visual tracker based on support vector machines (SVMs), for which a novel graph mode-based contextual kernel is designed to effectively capture higher-order contextual information from samples. To do so, we first create a visual graph whose similarity matrix is determined by a baseline visual kernel. Second, a set of high-order contexts is discovered in the visual graph by seeking its modes; each graph mode corresponds to a vertex community, termed a high-order context. Third, we construct a contextual kernel that effectively captures the interaction information between these high-order contexts. Finally, the contextual kernel is embedded into SVMs for robust tracking. Experimental results on challenging videos demonstrate the effectiveness and robustness of the proposed tracker.
{"title":"Graph mode-based contextual kernels for robust SVM tracking","authors":"Xi Li, A. Dick, Hanzi Wang, Chunhua Shen, A. Hengel","doi":"10.1109/ICCV.2011.6126364","DOIUrl":"https://doi.org/10.1109/ICCV.2011.6126364","url":null,"abstract":"Visual tracking has been typically solved as a binary classification problem. Most existing trackers only consider the pairwise interactions between samples, and thereby ignore the higher-order contextual interactions, which may lead to the sensitivity to complicated factors such as noises, outliers, background clutters and so on. In this paper, we propose a visual tracker based on support vector machines (SVMs), for which a novel graph mode-based contextual kernel is designed to effectively capture the higher-order contextual information from samples. To do so, we first create a visual graph whose similarity matrix is determined by a baseline visual kernel. Second, a set of high-order contexts are discovered in the visual graph. The problem of discovering these high-order contexts is solved by seeking modes of the visual graph. Each graph mode corresponds to a vertex community termed as a high-order context. Third, we construct a contextual kernel that effectively captures the interaction information between the high-order contexts. Finally, this contextual kernel is embedded into SVMs for robust tracking. Experimental results on challenging videos demonstrate the effectiveness and robustness of the proposed tracker.","PeriodicalId":6391,"journal":{"name":"2011 International Conference on Computer Vision","volume":"26 1","pages":"1156-1163"},"PeriodicalIF":0.0,"publicationDate":"2011-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89128675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed cosegmentation via submodular optimization on anisotropic diffusion
Pub Date: 2011-11-06 | DOI: 10.1109/ICCV.2011.6126239
Gunhee Kim, E. Xing, Li Fei-Fei, T. Kanade
The saliency of regions or objects in an image can be significantly boosted if they recur in multiple images. Leveraging this idea, cosegmentation jointly segments common regions from multiple images. In this paper, we propose CoSand, a distributed cosegmentation approach for highly variable, large-scale image collections. The segmentation task is modeled as temperature maximization on anisotropic heat diffusion, in which temperature maximization with K heat sources corresponds to a K-way segmentation that maximizes the segmentation confidence of every pixel in an image. Our method exploits a strong theoretical property: the temperature under linear anisotropic diffusion is a submodular function, so a greedy algorithm guarantees at least a constant-factor approximation to the optimal solution of the temperature maximization. This theoretical result applies to scalable cosegmentation as well as to diversity ranking and single-image segmentation. We evaluate CoSand on the MSRC and ImageNet datasets and show that it is competitive with previous work in accuracy while offering much better scalability.
{"title":"Distributed cosegmentation via submodular optimization on anisotropic diffusion","authors":"Gunhee Kim, E. Xing, Li Fei-Fei, T. Kanade","doi":"10.1109/ICCV.2011.6126239","DOIUrl":"https://doi.org/10.1109/ICCV.2011.6126239","url":null,"abstract":"The saliency of regions or objects in an image can be significantly boosted if they recur in multiple images. Leveraging this idea, cosegmentation jointly segments common regions from multiple images. In this paper, we propose CoSand, a distributed cosegmentation approach for a highly variable large-scale image collection. The segmentation task is modeled by temperature maximization on anisotropic heat diffusion, of which the temperature maximization with finite K heat sources corresponds to a K-way segmentation that maximizes the segmentation confidence of every pixel in an image. We show that our method takes advantage of a strong theoretic property in that the temperature under linear anisotropic diffusion is a submodular function; therefore, a greedy algorithm guarantees at least a constant factor approximation to the optimal solution for temperature maximization. Our theoretic result is successfully applied to scalable cosegmentation as well as diversity ranking and single-image segmentation. We evaluate CoSand on MSRC and ImageNet datasets, and show its competence both in competitive performance over previous work, and in much superior scalability.","PeriodicalId":6391,"journal":{"name":"2011 International Conference on Computer Vision","volume":"16 1","pages":"169-176"},"PeriodicalIF":0.0,"publicationDate":"2011-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81399288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Locally rigid globally non-rigid surface registration
Pub Date: 2011-11-06 | DOI: 10.1109/ICCV.2011.6126411
Kent Fujiwara, K. Nishino, J. Takamatsu, Bo Zheng, K. Ikeuchi
We present a novel non-rigid surface registration method that achieves high accuracy and matches characteristic features without manual intervention. The key insight is to consider the entire shape as a collection of local structures that individually undergo rigid transformations to collectively deform the global structure. We realize this locally rigid but globally non-rigid surface registration with a newly derived dual-grid Free-form Deformation (FFD) framework. We first represent the source and target shapes with their signed distance fields (SDF). We then superimpose a sampling grid onto a conventional FFD grid that is dual to the control points. Each control point is then iteratively translated by a rigid transformation that minimizes the difference between two SDFs within the corresponding sampling region. The translated control points then interpolate the embedding space within the FFD grid and determine the overall deformation. The experimental results clearly demonstrate that our method is capable of overcoming the difficulty of preserving and matching local features.
{"title":"Locally rigid globally non-rigid surface registration","authors":"Kent Fujiwara, K. Nishino, J. Takamatsu, Bo Zheng, K. Ikeuchi","doi":"10.1109/ICCV.2011.6126411","DOIUrl":"https://doi.org/10.1109/ICCV.2011.6126411","url":null,"abstract":"We present a novel non-rigid surface registration method that achieves high accuracy and matches characteristic features without manual intervention. The key insight is to consider the entire shape as a collection of local structures that individually undergo rigid transformations to collectively deform the global structure. We realize this locally rigid but globally non-rigid surface registration with a newly derived dual-grid Free-form Deformation (FFD) framework. We first represent the source and target shapes with their signed distance fields (SDF). We then superimpose a sampling grid onto a conventional FFD grid that is dual to the control points. Each control point is then iteratively translated by a rigid transformation that minimizes the difference between two SDFs within the corresponding sampling region. The translated control points then interpolate the embedding space within the FFD grid and determine the overall deformation. The experimental results clearly demonstrate that our method is capable of overcoming the difficulty of preserving and matching local features.","PeriodicalId":6391,"journal":{"name":"2011 International Conference on Computer Vision","volume":"100 1","pages":"1527-1534"},"PeriodicalIF":0.0,"publicationDate":"2011-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84980846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dense one-shot 3D reconstruction by detecting continuous regions with parallel line projection
Pub Date: 2011-11-06 | DOI: 10.1109/ICCV.2011.6126460
R. Sagawa, Hiroshi Kawasaki, S. Kiyota, Furukawa Ryo
3D scanning of moving objects has many applications, for example marker-less motion capture, fluid-dynamics analysis, and the study of object explosions. One approach to acquiring accurate shape is a projector-camera system; in particular, methods that reconstruct a shape from a single image with a static pattern are suitable for capturing fast-moving objects. In this paper, we propose a method that uses a grid pattern consisting of sets of parallel lines, spatially encoded by a periodic color pattern. Although the information in the camera image is sparse, the proposed method extracts dense (pixel-wise) phase information from the sparse pattern. As a result, continuous regions in the camera image can be extracted by analyzing the phase. Since one degree of freedom (DOF) remains for each region, we propose a linear solution that eliminates this DOF using the geometric configuration of the devices, i.e., the epipolar constraint. In addition, because the projected pattern consists of parallel lines at equal intervals, the solution space is finite and the linear system can be solved efficiently by an integer least-squares method. We present formulations for both single and multiple projectors, evaluate the accuracy of the correspondences in simulation with varying numbers of projectors, and finally demonstrate dense 3D reconstruction of moving objects in experiments.
{"title":"Dense one-shot 3D reconstruction by detecting continuous regions with parallel line projection","authors":"R. Sagawa, Hiroshi Kawasaki, S. Kiyota, Furukawa Ryo","doi":"10.1109/ICCV.2011.6126460","DOIUrl":"https://doi.org/10.1109/ICCV.2011.6126460","url":null,"abstract":"3D scanning of moving objects has many applications, for example, marker-less motion capture, analysis on fluid dynamics, object explosion and so on. One of the approach to acquire accurate shape is a projector-camera system, especially the methods that reconstructs a shape by using a single image with static pattern is suitable for capturing fast moving object. In this paper, we propose a method that uses a grid pattern consisting of sets of parallel lines. The pattern is spatially encoded by a periodic color pattern. While informations are sparse in the camera image, the proposed method extracts the dense (pixel-wise) phase informations from the sparse pattern. As the result, continuous regions in the camera images can be extracted by analyzing the phase. Since there remain one DOF for each region, we propose the linear solution to eliminate the DOF by using geometric informations of the devices, i.e. epipolar constraint. In addition, solution space is finite because projected pattern consists of parallel lines with same intervals, the linear equation can be efficiently solved by integer least square method. In this paper, the formulations for both single and multiple projectors are presented. We evaluated the accuracy of correspondences and showed the comparison with respect to the number of projectors by simulation. Finally, the dense 3D reconstruction of moving objects are presented in the experiments.","PeriodicalId":6391,"journal":{"name":"2011 International Conference on Computer Vision","volume":"76 1","pages":"1911-1918"},"PeriodicalIF":0.0,"publicationDate":"2011-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89678538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognizing jumbled images: The role of local and global information in image classification
Pub Date: 2011-11-06 | DOI: 10.1109/ICCV.2011.6126283
Devi Parikh
The performance of current state-of-the-art computer vision algorithms at image classification falls significantly short of human abilities. To reduce this gap, it is important for the community to know what problems to solve, not just how to solve them. Towards this goal, we use jumbled images to tease apart two widely investigated aspects, local and global information in images, and identify the performance bottleneck. Interestingly, humans have been shown to reliably recognize jumbled images. The goal of our paper is to determine a functional model that mimics how humans recognize jumbled images, i.e., by exploiting local information alone, and to evaluate whether existing implementations of this computational model suffice to match human performance. Surprisingly, in our series of human studies and machine experiments, we find that a simple bag-of-words-based, majority-vote-like strategy is an accurate functional model of how humans recognize jumbled images. Moreover, a straightforward machine implementation of this model achieves accuracies similar to those of human subjects at classifying jumbled images. This indicates that existing machine vision techniques perhaps already leverage local information effectively, and that future research should focus on more advanced modeling of global information.
{"title":"Recognizing jumbled images: The role of local and global information in image classification","authors":"Devi Parikh","doi":"10.1109/ICCV.2011.6126283","DOIUrl":"https://doi.org/10.1109/ICCV.2011.6126283","url":null,"abstract":"The performance of current state-of-the-art computer vision algorithms at image classification falls significantly short as compared to human abilities. To reduce this gap, it is important for the community to know what problems to solve, and not just how to solve them. Towards this goal, via the use of jumbled images, we strip apart two widely investigated aspects: local and global information in images, and identify the performance bottleneck. Interestingly, humans have been shown to reliably recognize jumbled images. The goal of our paper is to determine a functional model that mimics how humans recognize jumbled images i.e. exploit local information alone, and further evaluate if existing implementations of this computational model suffice to match human performance. Surprisingly, in our series of human studies and machine experiments, we find that a simple bag-of-words based majority-vote-like strategy is an accurate functional model of how humans recognize jumbled images. Moreover, a straightforward machine implementation of this model achieves accuracies similar to human subjects at classifying jumbled images. This indicates that perhaps existing machine vision techniques already leverage local information from images effectively, and future research efforts should be focused on more advanced modeling of global information.","PeriodicalId":6391,"journal":{"name":"2011 International Conference on Computer Vision","volume":"196 1","pages":"519-526"},"PeriodicalIF":0.0,"publicationDate":"2011-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79855633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Globally optimal solution to multi-object tracking with merged measurements
Pub Date: 2011-11-06 | DOI: 10.1109/ICCV.2011.6126532
João F. Henriques, Rui Caseiro, Jorge P. Batista
Multiple object tracking has been formulated recently as a global optimization problem, and solved efficiently with optimal methods such as the Hungarian Algorithm. A severe limitation is the inability to model multiple objects that are merged into a single measurement, and track them as a group, while retaining optimality. This work presents a new graph structure that encodes these multiple-match events as standard one-to-one matches, allowing computation of the solution in polynomial time. Since identities are lost when objects merge, an efficient method to identify groups is also presented, as a flow circulation problem. The problem of tracking individual objects across groups is then posed as a standard optimal assignment. Experiments show increased performance on the PETS 2006 and 2009 datasets compared to state-of-the-art algorithms.
{"title":"Globally optimal solution to multi-object tracking with merged measurements","authors":"João F. Henriques, Rui Caseiro, Jorge P. Batista","doi":"10.1109/ICCV.2011.6126532","DOIUrl":"https://doi.org/10.1109/ICCV.2011.6126532","url":null,"abstract":"Multiple object tracking has been formulated recently as a global optimization problem, and solved efficiently with optimal methods such as the Hungarian Algorithm. A severe limitation is the inability to model multiple objects that are merged into a single measurement, and track them as a group, while retaining optimality. This work presents a new graph structure that encodes these multiple-match events as standard one-to-one matches, allowing computation of the solution in polynomial time. Since identities are lost when objects merge, an efficient method to identify groups is also presented, as a flow circulation problem. The problem of tracking individual objects across groups is then posed as a standard optimal assignment. Experiments show increased performance on the PETS 2006 and 2009 datasets compared to state-of-the-art algorithms.","PeriodicalId":6391,"journal":{"name":"2011 International Conference on Computer Vision","volume":"13 1","pages":"2470-2477"},"PeriodicalIF":0.0,"publicationDate":"2011-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80244644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}