Image matching with distinctive visual vocabulary
Pub Date: 2011-01-05 | DOI: 10.1109/WACV.2011.5711532
Hongwen Kang, M. Hebert, T. Kanade
In this paper we propose an image indexing and matching algorithm that relies on selecting distinctive high-dimensional features. In contrast with conventional techniques that treat all features equally, we claim that one can benefit significantly from focusing on distinctive features. We propose a bag-of-words algorithm that incorporates feature distinctiveness into visual vocabulary generation. Our approach compares favorably with the state of the art in image matching tasks on the University of Kentucky Recognition Benchmark dataset and on an indoor localization dataset. We also show that our approach scales up more gracefully on a large-scale Flickr dataset.
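The abstract does not specify how distinctiveness is measured; as a rough illustration only, the sketch below assumes a simple proxy (a nearest-neighbor distance-ratio score), keeps just the most distinctive descriptors, and builds a k-means vocabulary from them. All names and thresholds are placeholders, not the authors' implementation.

```python
# Minimal sketch of a "distinctive feature" bag-of-words pipeline (assumption: a
# descriptor is distinctive when its 1st-NN distance in a reference set is much
# smaller than its 2nd-NN distance, i.e. a low distance-ratio score).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def select_distinctive(descriptors, reference, keep_ratio=0.3):
    """Keep the descriptors with the lowest 1-NN/2-NN distance ratio."""
    nn = NearestNeighbors(n_neighbors=2).fit(reference)
    dist, _ = nn.kneighbors(descriptors)
    ratio = dist[:, 0] / np.maximum(dist[:, 1], 1e-12)
    order = np.argsort(ratio)                      # small ratio = unambiguous match
    keep = order[: int(len(order) * keep_ratio)]
    return descriptors[keep]

def build_vocabulary(all_descriptors, reference, n_words=1000):
    """Cluster only the distinctive descriptors into visual words."""
    distinctive = select_distinctive(all_descriptors, reference)
    return KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(distinctive)
```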
{"title":"Image matching with distinctive visual vocabulary","authors":"Hongwen Kang, M. Hebert, T. Kanade","doi":"10.1109/WACV.2011.5711532","DOIUrl":"https://doi.org/10.1109/WACV.2011.5711532","url":null,"abstract":"In this paper we propose an image indexing and matching algorithm that relies on selecting distinctive high dimensional features. In contrast with conventional techniques that treated all features equally, we claim that one can benefit significantly from focusing on distinctive features. We propose a bag-of-words algorithm that combines the feature distinctiveness in visual vocabulary generation. Our approach compares favorably with the state of the art in image matching tasks on the University of Kentucky Recognition Benchmark dataset and on an indoor localization dataset. We also show that our approach scales up more gracefully on a large scale Flickr dataset.","PeriodicalId":424724,"journal":{"name":"2011 IEEE Workshop on Applications of Computer Vision (WACV)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122253876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stacked spatial-pyramid kernel: An object-class recognition method to combine scores from random trees
Pub Date: 2011-01-05 | DOI: 10.1109/WACV.2011.5711522
N. Larios, Junyuan Lin, Mengzi Zhang, D. Lytle, A. Moldenke, L. Shapiro, Thomas G. Dietterich
The combination of local features, complementary feature types, and relative position information has been successfully applied to many object-class recognition tasks. Stacking is a common classification approach that combines the results from multiple classifiers, with the added benefit of allowing each classifier to handle a different feature space. However, the standard stacking method by its very nature discards any spatial information contained in the features, because only the combination of raw classification scores is input to the final classifier. The object-class recognition method proposed in this paper combines different feature types in a new stacking framework that efficiently quantizes input data and boosts classification accuracy, while allowing the use of spatial information. This classification method is applied to the task of automated insect-species identification for biomonitoring purposes. The test data set for this work contains 4722 images of 29 insect species, belonging to the three most common orders used to measure stream water quality, several of which are closely related and very difficult to distinguish. The specimens appear in different 3D positions, different orientations, and different developmental and degradation stages, with wide intra-class variation. On this very challenging data set, our new algorithm outperforms other classifiers, showing the benefits of using spatial information in the stacking framework with multiple dissimilar feature types.
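To make the stacking idea concrete, here is a generic baseline sketch in which one random-forest classifier per feature type produces class scores and a final SVM is trained on the concatenated scores. The paper's spatial-pyramid quantization of the scores is its key addition and is not reproduced here; the function names and model choices are illustrative assumptions.

```python
# Illustrative stacking baseline (not the authors' exact pipeline): one random-forest
# classifier per feature type produces class-probability scores, and a final SVM is
# trained on the concatenated scores. In practice the meta-classifier would be
# trained on out-of-fold scores to avoid overfitting.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def fit_stack(feature_sets, labels):
    """feature_sets: list of (n_samples, d_i) arrays, one per feature type."""
    base = [RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
            for X in feature_sets]
    meta_inputs = np.hstack([clf.predict_proba(X) for clf, X in zip(base, feature_sets)])
    meta = SVC(kernel="rbf").fit(meta_inputs, labels)
    return base, meta

def predict_stack(base, meta, feature_sets):
    meta_inputs = np.hstack([clf.predict_proba(X) for clf, X in zip(base, feature_sets)])
    return meta.predict(meta_inputs)
```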
{"title":"Stacked spatial-pyramid kernel: An object-class recognition method to combine scores from random trees","authors":"N. Larios, Junyuan Lin, Mengzi Zhang, D. Lytle, A. Moldenke, L. Shapiro, Thomas G. Dietterich","doi":"10.1109/WACV.2011.5711522","DOIUrl":"https://doi.org/10.1109/WACV.2011.5711522","url":null,"abstract":"The combination of local features, complementary feature types, and relative position information has been successfully applied to many object-class recognition tasks. Stacking is a common classification approach that combines the results from multiple classifiers, having the added benefit of allowing each classifier to handle a different feature space. However, the standard stacking method by its own nature discards any spatial information contained in the features, because only the combination of raw classification scores are input to the final classifier. The object-class recognition method proposed in this paper combines different feature types in a new stacking framework that efficiently quantizes input data and boosts classification accuracy, while allowing the use of spatial information. This classification method is applied to the task of automated insect-species identification for biomonitoring purposes. The test data set for this work contains 4722 images with 29 insect species, belonging to the three most common orders used to measure stream water quality, several of which are closely related and very difficult to distinguish. The specimens are in different 3D positions, different orientations, and different developmental and degradation stages with wide intra-class variation. On this very challenging data set, our new algorithm outperforms other classifiers, showing the benefits of using spatial information in the stacking framework with multiple dissimilar feature types.","PeriodicalId":424724,"journal":{"name":"2011 IEEE Workshop on Applications of Computer Vision (WACV)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123788029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A study on recognizing non-artistic face sketches
Pub Date: 2011-01-05 | DOI: 10.1109/WACV.2011.5711509
Hossein Nejati, T. Sim
Face sketches have been used in eyewitness testimony for about a century. These sketches are crucial for finding suspects when no photo is available, only a mental image in the eyewitness's mind. However, research shows that the current procedures used for eyewitness testimony have two main problems. First, they can significantly disturb the memories of the eyewitness. Second, in many cases, these procedures result in face images far from their target faces. These two problems are related to the plasticity of the human visual system and to the differences between face perception in humans (holistic) and current methods of sketch production (piecemeal). In this paper, we present some insights toward more realistic sketch-to-photo matching. We describe how to retrieve identity-specific information from crude sketches drawn directly by non-artistic eyewitnesses. The sketches we used contain only facial component outlines and facial marks (e.g., wrinkles and moles). We compare the results of automatically matching two types of sketches (trace-over and user-provided, 25 each) to four types of faces (original, locally exaggerated, configurally exaggerated, and globally exaggerated, 249 each), using two methods (PDM distance comparison and PCA classification). Based on our results, we argue that, for automatic non-artistic sketch-to-photo matching, algorithms should compare the user-provided sketches with globally exaggerated faces, with a soft constraint on facial marks, to achieve the best matching rates. This is because the user-provided sketch, drawn from the user's mental image, appears to be caricatured both locally and configurally.
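As a toy illustration of the PCA-classification route mentioned above (not the authors' pipeline), one could project gallery faces and a query sketch into a shared PCA subspace and rank gallery identities by distance; the exaggeration step and the facial-mark constraint are assumed to happen upstream.

```python
# Toy PCA-based ranking: gallery images and the query sketch are flattened, equally
# sized vectors; all parameters are placeholders.
import numpy as np
from sklearn.decomposition import PCA

def rank_gallery(gallery, query_sketch, n_components=50):
    """Return gallery indices ordered from best to worst match."""
    n_components = min(n_components, gallery.shape[0], gallery.shape[1])
    pca = PCA(n_components=n_components).fit(gallery)
    g = pca.transform(gallery)
    q = pca.transform(query_sketch.reshape(1, -1))
    dists = np.linalg.norm(g - q, axis=1)
    return np.argsort(dists)
```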
{"title":"A study on recognizing non-artistic face sketches","authors":"Hossein Nejati, T. Sim","doi":"10.1109/WACV.2011.5711509","DOIUrl":"https://doi.org/10.1109/WACV.2011.5711509","url":null,"abstract":"Face sketches are being used in eyewitness testimonies for about a century. These sketches are crucial in finding suspects when no photo is available, but a mental image in the eyewitness's mind. However, research shows that current procedures used for eyewitness testimonies have two main problems. First, they can significantly disturb the memories of the eyewitness. Second, in many cases, these procedures result in face images far from their target faces. These two problems are related to the plasticity of the human visual system and the differences between face perception in humans (holistic) and current methods of sketch production (piecemeal). In this paper, we present some insights for more realistic sketch to photo matching. We describe how to retrieve identity specific information from crude sketches, directly drawn by the non-artistic eyewitnesses. The sketches we used merely contain facial component outlines and facial marks (e.g. wrinkles and moles). We compare results of automatically matching two types sketches (trace-over and user-provided, 25 each) to four types of faces (original, locally exaggerated, configurally exaggerated, and globally exaggerated, 249 each), using two methods (PDM distance comparison and PCA classification). Based on our results, we argue that for automatic non-artistic sketch to photo matching, the algorithms should compare the user-provided sketches with globally exaggerated faces, with a soft constraint on facial marks, to achieve the best matching rates. This is because the user-provided sketch from the user's mental image, seems to be caricatured both locally and configurally.","PeriodicalId":424724,"journal":{"name":"2011 IEEE Workshop on Applications of Computer Vision (WACV)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128124705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented transit maps
Pub Date: 2011-01-05 | DOI: 10.1109/WACV.2011.5711543
Matei Stroila, J. Mays, Bill Gale, Jeff Bach
We introduce a new class of mobile augmented reality navigation applications that allow people to interact with transit maps in public transit stations and vehicles. Our system consists of a database of coded transit maps, a vision engine for recognizing and tracking planar objects, and a graphics engine to overlay relevant real-time navigation information, such as the user's current location and the time to destination. We demonstrate this system with a prototype application built entirely from open-source components. The application runs on a Nokia N900 mobile phone equipped with Maemo, a Debian Linux-based operating system. We use the OpenCV library and the new Frankencamera API for the vision engine. The application is written using the LGPL-licensed Qt C++ framework.
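A present-day OpenCV sketch of the planar map recognition step might look like the following; the original application targets the N900/Frankencamera stack, so treat the feature choice (ORB), the matcher, and all thresholds as assumptions rather than the authors' implementation.

```python
# Rough sketch: match local features between a stored, coded map image and the camera
# frame, then estimate a homography so navigation overlays can be warped onto the view.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def locate_map(map_img, frame):
    """Return the map->frame homography, or None if the map is not visible."""
    kp1, des1 = orb.detectAndCompute(map_img, None)
    kp2, des2 = orb.detectAndCompute(frame, None)
    if des1 is None or des2 is None:
        return None
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
    if len(matches) < 8:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```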
{"title":"Augmented transit maps","authors":"Matei Stroila, J. Mays, Bill Gale, Jeff Bach","doi":"10.1109/WACV.2011.5711543","DOIUrl":"https://doi.org/10.1109/WACV.2011.5711543","url":null,"abstract":"We introduce a new class of mobile augmented reality navigation applications that allow people to interact with transit maps in public transit stations and vehicles. Our system consists of a database of coded transit maps, a vision engine for recognizing and tracking planar objects, and a graphics engine to overlay relevant real-time navigation information, such as the user's current location and the time to destination. We demonstrate this system with a prototype application built from open source components only. The application runs on a Nokia N900 mobile phone equipped with Maemo, a Debian Linux-based operating system. We use the OpenCV library and the new Frankencamera API for the vision engine. The application is written using the LGPL licensed Qt C++ Framework.","PeriodicalId":424724,"journal":{"name":"2011 IEEE Workshop on Applications of Computer Vision (WACV)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128373935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A random center surround bottom up visual attention model useful for salient region detection
Pub Date: 2011-01-05 | DOI: 10.1109/WACV.2011.5711499
T. Vikram, M. Tscherepanow, B. Wrede
In this article, we propose a bottom-up saliency model that captures the contrast between randomly selected pixels in an image. The model is explained in terms of the stimulus bias between two given stimuli (pixel intensity values) in an image and has a minimal set of tunable parameters. The methodology does not require any training or priors. We followed an established experimental setting and obtained state-of-the-art results for salient region detection on the MSR dataset. Further experiments demonstrate that our method is robust to noise and has, in comparison to six other state-of-the-art models, consistent performance in terms of recall, precision, and F-measure.
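A simplified version of the random-contrast idea can be written in a few lines: the saliency of a pixel is its average absolute intensity difference from a set of randomly sampled reference pixels. The paper's exact sampling scheme and normalization may differ; the parameters below are placeholders.

```python
# Simplified random-contrast saliency: each pixel's saliency is its mean absolute
# intensity difference from n_samples randomly chosen reference pixels.
import numpy as np

def random_contrast_saliency(gray, n_samples=64, seed=0):
    """gray: 2D float array in [0, 1]; returns a saliency map of the same shape."""
    rng = np.random.default_rng(seed)
    h, w = gray.shape
    ys = rng.integers(0, h, n_samples)
    xs = rng.integers(0, w, n_samples)
    samples = gray[ys, xs]                                    # (n_samples,)
    sal = np.abs(gray[..., None] - samples[None, None, :]).mean(axis=2)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```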
{"title":"A random center surround bottom up visual attention model useful for salient region detection","authors":"T. Vikram, M. Tscherepanow, B. Wrede","doi":"10.1109/WACV.2011.5711499","DOIUrl":"https://doi.org/10.1109/WACV.2011.5711499","url":null,"abstract":"In this article, we propose a bottom-up saliency model which works on capturing the contrast between random pixels in an image. The model is explained on the basis of the stimulus bias between two given stimuli (pixel intensity values) in an image and has a minimal set of tunable parameters. The methodology does not require any training bases or priors. We followed an established experimental setting and obtained state-of-the-art-results for salient region detection on the MSR dataset. Further experiments demonstrate that our method is robust to noise and has, in comparison to six other state-of-the-art models, a consistent performance in terms of recall, precision and F-measure.","PeriodicalId":424724,"journal":{"name":"2011 IEEE Workshop on Applications of Computer Vision (WACV)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134205774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting questionable observers using face track clustering
Pub Date: 2011-01-05 | DOI: 10.1109/WACV.2011.5711501
Jeremiah R. Barr, K. Bowyer, P. Flynn
We introduce the questionable observer detection problem: given a collection of videos of crowds, determine which individuals appear unusually often across the set of videos. The algorithm proposed here detects these individuals by clustering sequences of face images. To provide robustness to sensor noise, variations in facial expression and resolution, blur, and intermittent occlusions, we merge similar face image sequences from the same video and discard outlying face patterns prior to clustering. We present experiments on a challenging video dataset. The results show that the proposed method can surpass a clustering algorithm based on Neurotechnology's VeriLook face recognition software in terms of both detection rate and false detection frequency.
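Schematically, and assuming the tracking, merging, and outlier-removal steps have already produced one descriptor per face track, the detection logic reduces to clustering track descriptors across videos and flagging clusters that span unusually many distinct videos. The clustering method and thresholds below are illustrative assumptions, not the paper's.

```python
# Flag clusters of face tracks whose members come from many different videos.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def questionable_observers(descriptors, video_ids, distance_threshold=0.6, min_videos=3):
    """descriptors: (n_tracks, d); video_ids: length-n_tracks list of video labels."""
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=distance_threshold, linkage="average"
    ).fit_predict(descriptors)
    flagged = []
    for c in np.unique(labels):
        vids = {video_ids[i] for i in np.where(labels == c)[0]}
        if len(vids) >= min_videos:
            flagged.append((c, sorted(vids)))       # (cluster id, videos covered)
    return flagged
```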
{"title":"Detecting questionable observers using face track clustering","authors":"Jeremiah R. Barr, K. Bowyer, P. Flynn","doi":"10.1109/WACV.2011.5711501","DOIUrl":"https://doi.org/10.1109/WACV.2011.5711501","url":null,"abstract":"We introduce the questionable observer detection problem: Given a collection of videos of crowds, determine which individuals appear unusually often across the set of videos. The algorithm proposed here detects these individuals by clustering sequences of face images. To provide robustness to sensor noise, facial expression and resolution variations, blur, and intermittent occlusions, we merge similar face image sequences from the same video and discard outlying face patterns prior to clustering. We present experiments on a challenging video dataset. The results show that the proposed method can surpass the performance of a clustering algorithm based on the VeriLook face recognition software by Neurotechnology both in terms of the detection rate and the false detection frequency.","PeriodicalId":424724,"journal":{"name":"2011 IEEE Workshop on Applications of Computer Vision (WACV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122921507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Localized support vector machines using Parzen window for incomplete sets of categories
Pub Date: 2011-01-05 | DOI: 10.1109/WACV.2011.5711538
Kevin L. Veon, M. Mahoor
This paper describes a novel approach to pattern classification that combines Parzen windows and support vector machines. Pattern classification is usually performed in universes where all possible categories are defined. Most current supervised learning classification techniques do not account for undefined categories. In a universe that is only partially defined, there may be objects that do not fall into the known set of categories, and it would be a mistake to always classify these objects as a known category. We propose a Parzen window-based approach that is capable of classifying an object as not belonging to any known class. In our approach, we use a Parzen window to identify local neighbors of a test point and train a localized support vector machine on the identified neighbors. Visual category recognition experiments compare the results of our approach, localized support vector machines using a k-nearest neighbors approach, and global support vector machines. Our experiments show that our Parzen window approach gives superior results when testing with incomplete sets of categories, and comparable results when testing with complete sets.
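A minimal sketch of this idea, with placeholder bandwidth and thresholds: a Gaussian Parzen window selects the training points near the test sample; if the local density is too low the sample is rejected as belonging to no known category, otherwise a local SVM is trained on the selected neighbors. This is an illustration of the concept, not the authors' exact rejection rule.

```python
# Parzen-window localized SVM with an "unknown category" rejection option.
import numpy as np
from sklearn.svm import SVC

def parzen_local_svm(X_train, y_train, x_test, bandwidth=1.0,
                     weight_floor=1e-3, density_floor=1e-2):
    d2 = np.sum((X_train - x_test) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # Parzen window weights
    if w.mean() < density_floor:                    # low local density -> unknown
        return None
    local = w > weight_floor                        # neighbors inside the window
    if len(np.unique(y_train[local])) < 2:          # only one class nearby
        return y_train[local][0] if local.any() else None
    clf = SVC(kernel="rbf").fit(X_train[local], y_train[local])
    return clf.predict(x_test.reshape(1, -1))[0]
```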
{"title":"Localized support vector machines using Parzen window for incomplete sets of categories","authors":"Kevin L. Veon, M. Mahoor","doi":"10.1109/WACV.2011.5711538","DOIUrl":"https://doi.org/10.1109/WACV.2011.5711538","url":null,"abstract":"This paper describes a novel approach to pattern classification that combines Parzen window and support vector machines. Pattern classification is usually performed in universes where all possible categories are defined. Most of the current supervised learning classification techniques do not account for undefined categories. In a universe that is only partially defined, there may be objects that do not fall into the known set of categories. It would be a mistake to always classify these objects as a known category. We propose a Parzen window-based approach which is capable of classifying an object as not belonging to a known class. In our approach we use a Parzen window to identify local neighbors of a test point and train a localized support vector machine on the identified neighbors. Visual category recognition experiments are performed to compare the results of our approach, localized support vector machines using a k-nearest neighbors approach, and global support vector machines. Our experiments show that our Parzen window approach has superior results when testing with incomplete sets, and comparable results when testing with complete sets.","PeriodicalId":424724,"journal":{"name":"2011 IEEE Workshop on Applications of Computer Vision (WACV)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127042196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supervised particle filter for tracking 2D human pose in monocular video
Pub Date: 2011-01-05 | DOI: 10.1109/WACV.2011.5711527
S. Sedai, D. Huynh, Bennamoun
In this paper, we propose a hybrid method that combines supervised learning and particle filtering to track the 2D pose of a human subject in monocular video sequences. Our approach, which we call a supervised particle filter, consists of two steps: a training step and a tracking step. In the training step, we use a supervised learning method to train regressors that take silhouette descriptors as input and produce 2D poses as output. In the tracking step, the output pose estimated by the regressors is combined with the particle filter to track the 2D pose in each video frame. Unlike the standard particle filter, our method does not require any manual initialization. We have tested our approach on the HumanEva video datasets and compared it with the standard particle filter and with 2D pose estimation on individual frames. Our experimental results show that our approach can successfully track the pose over long video sequences and that it gives more accurate 2D human pose tracking than either the standard particle filter or per-frame 2D pose estimation.
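The precise fusion of the regressor output with the particle filter is the paper's contribution; the sketch below shows only one plausible variant, in which particles are propagated by a random-walk motion model and reweighted by a mixture of an image-likelihood term and closeness to the regressor's predicted pose. All parameter values are placeholders.

```python
# One illustrative per-frame update for a regressor-guided particle filter.
import numpy as np

def track_frame(particles, regressor_pose, likelihood_fn,
                motion_std=2.0, alpha=0.5, rng=None):
    """particles: (N, D) 2D-pose hypotheses; regressor_pose: (D,) prediction."""
    rng = rng or np.random.default_rng(0)
    particles = particles + rng.normal(0.0, motion_std, particles.shape)  # propagate
    lik = np.array([likelihood_fn(p) for p in particles])                 # image evidence
    d2 = np.sum((particles - regressor_pose) ** 2, axis=1)
    prior = np.exp(-d2 / (2.0 * motion_std ** 2))         # pull toward regressor output
    w = alpha * lik + (1.0 - alpha) * prior
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)            # resample
    return particles[idx]
```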
{"title":"Supervised particle filter for tracking 2D human pose in monocular video","authors":"S. Sedai, D. Huynh, Bennamoun","doi":"10.1109/WACV.2011.5711527","DOIUrl":"https://doi.org/10.1109/WACV.2011.5711527","url":null,"abstract":"In this paper, we propose a hybrid method that combines supervised learning and particle filtering to track the 2D pose of a human subject in monocular video sequences. Our approach, which we call a supervised particle filter method, consists of two steps: the training step and the tracking step. In the training step, we use a supervised learning method to train the regressors that take the silhouette descriptors as input and produce the 2D poses as output. In the tracking step, the output pose estimated from the regressors is combined with the particle filter to track the 2D pose in each video frame. Unlike the particle filter, our method does not require any manual initialization. We have tested our approach using the HumanEva video datasets and compared it with the standard particle filter and 2D pose estimation on individual frames. Our experimental results show that our approach can successfully track the pose over long video sequences and that it gives more accurate 2D human pose tracking than the particle filter and 2D pose estimation.","PeriodicalId":424724,"journal":{"name":"2011 IEEE Workshop on Applications of Computer Vision (WACV)","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127298903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A performance study of an intelligent headlight control system
Pub Date: 2011-01-05 | DOI: 10.1109/WACV.2011.5711537
Ying Li, Sharath Pankanti
In this paper, we first present the architecture of an intelligent headlight control (IHC) system that we developed in our earlier work. This IHC system aims to automatically control a vehicle's beam state (high beam or low beam) during night-time driving. A three-level decision framework built around a support vector machine (SVM) learning engine is then briefly discussed. Next, we turn our focus to a study of system performance, varying the SVM feature set and exploring various SVM training options and adjustments through a set of experiments. We believe that the lessons learned from this performance study provide useful guidelines for extracting effective SVM features within the IHC problem domain, as well as for training an effective SVM learning engine for more general applications.
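The kind of sweep described above can be organized compactly with scikit-learn's grid search; the feature sets and parameter grid below are placeholders, not the ones evaluated in the paper.

```python
# Compare SVM performance across feature-set variants and training options.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def sweep(feature_variants, labels):
    """feature_variants: dict mapping a feature-set name to an (n_samples, d) array."""
    grid = {"C": [0.1, 1, 10, 100],
            "gamma": ["scale", 0.01, 0.1],
            "kernel": ["rbf", "linear"]}
    results = {}
    for name, X in feature_variants.items():
        search = GridSearchCV(SVC(), grid, cv=5, n_jobs=-1).fit(X, labels)
        results[name] = (search.best_score_, search.best_params_)
    return results
```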
{"title":"A performance study of an intelligent headlight control system","authors":"Ying Li, Sharath Pankanti","doi":"10.1109/WACV.2011.5711537","DOIUrl":"https://doi.org/10.1109/WACV.2011.5711537","url":null,"abstract":"In this paper, we first present the architecture of an intelligent headlight control (IHC) system that we developed in our earlier work. This IHC system aims to automatically control a vehicle's beam state (high beam or low beam) during a night-time drive. A three-level decision framework built around a support vector machine (SVM) learning engine is then briefly discussed. Next, we switch our focus to the study of system performance by varying the SVM feature set, as well as by exploiting various SVM training options and adjustments through a set of experiments. We believe that what we learned from this performance study can provide readers useful guidelines on extracting effective SVM features within the IHC problem domain, as well as on training an effective SVM learning engine for more generalized applications.","PeriodicalId":424724,"journal":{"name":"2011 IEEE Workshop on Applications of Computer Vision (WACV)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131540033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using visibility cameras to estimate atmospheric light extinction
Pub Date: 2011-01-05 | DOI: 10.1109/WACV.2011.5711556
Nathan Graves, S. Newsam
We describe methods for estimating the coefficient of atmospheric light extinction using visibility cameras. We use a standard haze image formation model to estimate atmospheric transmission using local contrast features as well as a recently proposed dark channel prior. A log-linear model is then used to relate transmission and extinction. We train and evaluate our model using an extensive set of ground-truth images acquired over a year-long period from two visibility cameras in the Phoenix, Arizona region. We present informative results that are particularly accurate for a visibility index used in long-term haze studies.
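A minimal sketch of the two stages, with placeholder parameters: estimate per-pixel transmission with the dark channel prior, then fit a log-linear model relating an image-level transmission statistic to the measured extinction coefficient. The patch size, omega, and the use of a per-image mean are assumptions, not the paper's exact settings.

```python
# (1) dark-channel-prior transmission estimate, (2) log-linear transmission->extinction fit.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel_transmission(img, atmosphere, patch=15, omega=0.95):
    """img: (H, W, 3) float array in [0, 1]; atmosphere: per-channel airlight estimate."""
    normalized = img / np.maximum(atmosphere, 1e-6)
    dark = minimum_filter(normalized.min(axis=2), size=patch)
    return 1.0 - omega * dark            # per-pixel transmission estimate

def fit_log_linear(transmissions, extinctions):
    """Fit log(extinction) = a + b * t, where t is a per-image transmission statistic."""
    b, a = np.polyfit(np.asarray(transmissions), np.log(extinctions), 1)
    return a, b

def predict_extinction(a, b, t):
    return np.exp(a + b * t)
```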
{"title":"Using visibility cameras to estimate atmospheric light extinction","authors":"Nathan Graves, S. Newsam","doi":"10.1109/WACV.2011.5711556","DOIUrl":"https://doi.org/10.1109/WACV.2011.5711556","url":null,"abstract":"We describe methods for estimating the coefficient of atmospheric light extinction using visibility cameras. We use a standard haze image formation model to estimate atmospheric transmission using local contrast features as well as a recently proposed dark channel prior. A log-linear model is then used to relate transmission and extinction. We train and evaluate our model using an extensive set of ground truth images acquired over a year long period from two visibility cameras in the Phoenix, Arizona region. We present informative results which are particularly accurate for a visibility index used in long-term haze studies.","PeriodicalId":424724,"journal":{"name":"2011 IEEE Workshop on Applications of Computer Vision (WACV)","volume":"59 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131873945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}