Validation of blind region learning and tracking
J. Black, Dimitrios Makris, T. Ellis
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570892
Multi-view tracking systems enable an object's identity to be preserved as it moves through a wide-area surveillance network of cameras. One limitation of these systems is an inability to track objects across blind regions, i.e., parts of the scene that are not observable by the camera network. Recent interest has been shown in blind-region learning and tracking, but little work has been reported on the systematic performance evaluation of these algorithms. The main contribution of this paper is to define a set of novel techniques that can be employed to validate a camera topology model and a blind-region multi-view tracking algorithm.
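The validation idea lends itself to a simple check: if the topology model predicts a transition-time distribution for the link between one camera's exit zone and another's entry zone, ground-truth transits through the blind region should fall inside it. Below is a minimal, hypothetical Python sketch of such a check; the Gaussian link model, the function names and the three-sigma acceptance band are illustrative assumptions, not the paper's actual validation techniques.

```python
# Hedged sketch (not the authors' method): validate one learned camera-topology
# link by checking whether hand-labelled blind-region transit times fall inside
# the transition-time range the topology model predicts.

def validate_link(observed_transits_s, model_mean_s, model_std_s, n_sigma=3.0):
    """Fraction of ground-truth transit times explained by a link modelled
    as a Gaussian transition-time distribution (mean/std in seconds)."""
    lo = model_mean_s - n_sigma * model_std_s
    hi = model_mean_s + n_sigma * model_std_s
    inliers = [t for t in observed_transits_s if lo <= t <= hi]
    return len(inliers) / len(observed_transits_s)

# Example: a link learned as 8.0 +/- 1.5 s between camera A's exit zone and
# camera B's entry zone, checked against six labelled transits.
score = validate_link([6.9, 7.8, 8.4, 9.1, 8.0, 14.2], 8.0, 1.5)
print(f"fraction of transits explained by the link: {score:.2f}")
```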
{"title":"Validation of blind region learning and tracking","authors":"J. Black, Dimitrios Makris, T. Ellis","doi":"10.1109/VSPETS.2005.1570892","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570892","url":null,"abstract":"Multi view tracking systems enable an object's identity to be preserved as it moves through a wide area surveillance network of cameras. One limitation of these systems is an inability to track objects between blind regions, i.e. pans of the scene that are not observable by the network of cameras. Recent interest has been shown in blind region learning and tracking but not much work has been reported on the systematic performance evaluation of these algorithms. The main contribution of this paper is to define a set of novel techniques that can be employed to validate a camera topology model, and a blind region multi view tracking algorithm.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114824111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Contour-Based Moving Object Detection and Tracking
Masayuki Yokoyama, T. Poggio
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570925
We propose a fast and robust approach to the detection and tracking of moving objects. Our method is based on lines computed from a gradient-based optical flow and an edge detector. Although it is well known that gradient-based optical flow and edges are well matched for accurate computation of velocity, little attention has been paid to building detection and tracking systems around this combination of features. In our method, edges extracted using the optical flow and the edge detector are restored as lines, and the background lines of the previous frame are subtracted. Object contours are then obtained by applying snakes to the clustered lines. Detected objects are tracked, and each tracked object carries a state for handling occlusion and interference. Experimental results on outdoor scenes show the fast and robust performance of our method, which runs in 0.089 s/frame on a 900 MHz processor.
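As a rough illustration of the first two stages described above (keeping only edges that are moving according to a gradient-based optical flow, then restoring them as line segments), the following OpenCV sketch may help. It omits the line-level background subtraction, the clustering and the snake-based contour extraction, and all thresholds are assumed values rather than the paper's settings.

```python
# Hedged sketch (not the paper's implementation): combine dense gradient-based
# optical flow with an edge detector, keep only moving edges, restore them as
# line segments via a probabilistic Hough transform.
import cv2
import numpy as np

def moving_line_segments(prev_gray, gray, flow_thresh=1.0):
    # Dense gradient-based optical flow (Farneback) between two grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion = np.linalg.norm(flow, axis=2) > flow_thresh   # moving-pixel mask
    edges = cv2.Canny(gray, 50, 150)                      # edge detector
    moving_edges = np.where(motion, edges, 0).astype(np.uint8)
    # Restore the surviving edge pixels as line segments.
    lines = cv2.HoughLinesP(moving_edges, 1, np.pi / 180, 30,
                            minLineLength=10, maxLineGap=5)
    return [] if lines is None else [l[0] for l in lines]  # [x1, y1, x2, y2]
```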
{"title":"A Contour-Based Moving Object Detection and Tracking","authors":"Masayuki Yokoyama, T. Poggio","doi":"10.1109/VSPETS.2005.1570925","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570925","url":null,"abstract":"We propose a fast and robust approach to the detection and tracking of moving objects. Our method is based on using lines computed by a gradient-based optical flow and an edge detector. While it is known among researchers that gradient-based optical flow and edges are well matched for accurate computation of velocity, not much attention is paid to creating systems for detecting and tracking objects using this feature. In our method, extracted edges by using optical flow and the edge detector are restored as lines, and background lines of the previous frame are subtracted. Contours of objects are obtained by using snakes to clustered lines. Detected objects are tracked, and each tracked object has a state for handling occlusion and interference. The experimental results on outdoor-scenes show fast and robust performance of our method. The computation time of our method is 0.089 s/frame on a 900 MHz processor.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115837577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Interactive Generation of "Ground-truth" in Background Subtraction from Partially Labeled Examples
E. Grossmann, A. Kale, C. Jaynes
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570932
Ground truth segmentation of foreground and background is important for the performance evaluation of existing techniques and can guide principled development of video analysis algorithms. Unfortunately, generating ground truth data is cumbersome and incurs a high cost in human labor. In this paper, we propose an interactive method to produce foreground/background segmentation of video sequences captured by a stationary camera that requires comparatively little human labor while still producing high-quality results. Given a sequence, the user indicates, with a few clicks in a GUI, a few rectangular regions that contain only foreground or background pixels. AdaBoost then builds a classifier that combines the output of a set of weak classifiers. The resulting classifier is run on the remainder of the sequence. Based on the results and the accuracy requirements, the user can then select more example regions for training. This cycle of hand-labeling, training, and automatic classification leads to a high-quality segmentation with little effort. Our experiments show promising results, raise new issues and provide some insight into possible improvements.
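A minimal sketch of the labelling-and-training loop follows, under the assumption that per-pixel colour is the feature and decision stumps are the weak classifiers (the paper's actual weak-classifier set is richer). Function names and the rectangle format are invented for illustration.

```python
# Hedged sketch: train an AdaBoost classifier from a few user-drawn rectangles
# labelled foreground (1) or background (0), then classify every pixel of the
# remaining frames.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def pixels_in(frame, rect):
    """Flatten the pixels inside a user-drawn rectangle (x, y, w, h)."""
    x, y, w, h = rect
    return frame[y:y + h, x:x + w].reshape(-1, frame.shape[2])

def train_from_rectangles(frame, fg_rects, bg_rects):
    fg = [pixels_in(frame, r) for r in fg_rects]   # foreground examples
    bg = [pixels_in(frame, r) for r in bg_rects]   # background examples
    X = np.vstack(fg + bg).astype(float)
    y = np.hstack([np.ones(p.shape[0]) for p in fg] +
                  [np.zeros(p.shape[0]) for p in bg])
    # Decision stumps as weak classifiers, combined by AdaBoost.
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=50)
    return clf.fit(X, y)

def segment(frame, clf):
    """Per-pixel foreground (1) / background (0) labels for one frame."""
    h, w, c = frame.shape
    return clf.predict(frame.reshape(-1, c).astype(float)).reshape(h, w)
```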
{"title":"Towards Interactive Generation of \"Ground-truth\" in Background Subtraction from Partially Labeled Examples","authors":"E. Grossmann, A. Kale, C. Jaynes","doi":"10.1109/VSPETS.2005.1570932","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570932","url":null,"abstract":"Ground truth segmentation of foreground and background is important for performance evaluation of existing techniques and can guide principled development of video analysis algorithms. Unfortunately, generating ground truth data is a cumbersome and incurs a high cost in human labor. In this paper, we propose an interactive method to produce foreground/background segmentation of video sequences captured by a stationary camera, that requires comparatively little human labor, while still producing high quality results. Given a sequence, the user indicates, with a few clicks in a GUI, a few rectangular regions that contain only foreground or background pixels. Adaboost then builds a classifier that combines the output of a set of weak classifiers. The resulting classifier is run on the remainder of the sequence. Based on the results and the accuracy requirements, the user can then select more example regions for training. This cycle of hand-labeling, training and automatic classification steps leads to a high-quality segmentation with little effort. Our experiments show promising results, raise new issues and provide some insight on possible improvements.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115407434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust object matching for persistent tracking with heterogeneous features
Yanlin Guo, H. Sawhney, Rakesh Kumar, Steve Hsu
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570901
Tracking objects over a long period of time in realistic environments remains a challenging problem for ground and aerial video surveillance. Matching objects and verifying their identities across multiple spatial and temporal gaps proves to be an effective way to extend tracking range. When an object track is lost due to occlusion or other reasons, we need to learn the object signature and use it to confirm the object's identity against a set of active objects when it appears again. To deal with poor image quality and large variations in aerial video tracking, we present a unified framework that employs a heterogeneous collection of features, such as lines, points and regions, for robust vehicle matching under variations in illumination, aspect and camera pose. Our approach fully exploits the characteristics of vehicular objects, which consist of relatively large textureless areas delimited by line-like features, and demonstrates the importance of heterogeneous features for different stages of vehicle matching. Experiments demonstrate improved vehicle identification across multiple sightings when the heterogeneous feature set is used.
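The staged use of heterogeneous features can be pictured as a score-fusion step. The sketch below is only a schematic stand-in, with placeholder weights and an invented acceptance threshold, not the authors' matching pipeline.

```python
# Hedged sketch: fuse similarity scores computed from heterogeneous features
# (lines, points, regions) into a single score for deciding whether two vehicle
# sightings belong to the same object. All weights/thresholds are placeholders.

def match_score(sim_lines, sim_points, sim_regions,
                weights=(0.4, 0.3, 0.3), accept_at=0.6):
    """Each sim_* is a similarity in [0, 1] from one feature type."""
    fused = (weights[0] * sim_lines +
             weights[1] * sim_points +
             weights[2] * sim_regions)
    return fused, fused >= accept_at

# Example: strong line agreement, weaker point and region agreement.
score, same_vehicle = match_score(0.9, 0.5, 0.6)
```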
{"title":"Robust object matching for persistent tracking with heterogeneous features","authors":"Yanlin Guol, H. Sawhney, Rakesh Kumar, Steve Hsu","doi":"10.1109/VSPETS.2005.1570901","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570901","url":null,"abstract":"Tracking objects over a long period of time in realistic environments remains a challenging problem for ground and aerial video surveillance. Matching objects and verifying their identities across multiple spatial and temporal gaps proves to be an effective way to extend tracking range. When an object track is lost due to occlusion or other reasons, we need to learn the object signature and use it to confirm the object's identity against a set of active objects when it appears again. In order to deal with poor image quality and large variations in aerial video tracking, we present in this paper a unified framework that employs a heterogeneous collection of features such as lines, points and regions for robust vehicle matching under variations in illumination, aspect and camera poses. Our approach fully utilizes the characteristics of vehicular objects that consist of relatively large textureless areas delimited by line like features, and demonstrates the important usage of heterogeneous features for different stages of vehicle matching. Experiments demonstrate the enhancement in performance of vehicle identification across multiple sightings using the heterogeneous feature set.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"2010 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128212280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ontology-based hierarchical conceptual model for semantic representation of events in dynamic scenes
Lun Xin, T. Tan
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570898
Interest in the semantic analysis of events in dynamic scenes has grown in recent years, and many different methods have been reported for this challenging problem. A new approach to event modeling and analysis with semantic representations is proposed in this paper. Our method is inspired by the entity-relation model in software engineering. It integrates all related information into a hierarchical conceptual model called an ontology, and defines events as significant changes and mappings of conceptual units in the model. All concepts are represented by three basic components: an entity, a word, and a set of attributes. The lower level of our framework performs feature extraction, and in the upper level, semantically meaningful representations of events are derived using these words. Our framework is therefore data-driven and provides semantic outputs. Measuring the semantic similarity of concepts is another important problem; we propose a method based on a conceptual status vector (CSV) and a weighted semantic distance (WSD) to address it. Experimental results demonstrate the effectiveness of our approach on real-world videos captured from different scenes.
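Since the paper's exact definitions of the conceptual status vector (CSV) and weighted semantic distance (WSD) are not reproduced here, the following is only one plausible, hypothetical reading: a weighted L1 distance between two concepts' attribute vectors.

```python
# Hedged sketch: one possible form of a weighted semantic distance between two
# conceptual status vectors. The weights and the L1 form are assumptions, not
# the paper's definitions.
import numpy as np

def weighted_semantic_distance(csv_a, csv_b, weights):
    csv_a, csv_b, weights = map(np.asarray, (csv_a, csv_b, weights))
    return float(np.sum(weights * np.abs(csv_a - csv_b)) / np.sum(weights))

# Two concepts described by three attributes, the first weighted most heavily.
d = weighted_semantic_distance([1.0, 0.2, 0.5], [0.8, 0.9, 0.5], [3.0, 1.0, 1.0])
```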
{"title":"Ontology-based hierarchical conceptual model for semantic representation of events in dynamic scenes","authors":"Lun Xin, T. Tan","doi":"10.1109/VSPETS.2005.1570898","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570898","url":null,"abstract":"There is an increasing interest in semantic analysis of events in dynamic scenes in recent years, and many different methods have been reported for this challenging problem. A new approach towards event modeling and analysis with semantic representations is proposed in this paper. Our method is inspired by the entity-relation model in software engineering. It integrates all related information into a hierarchical conceptual model by the name of ontology, and defines events as significant changes and mappings of conceptual units in the mode. All concepts are represented by three basic components, an entity, a word, and a set of attributes. The lower level of our framework achieves the task of feature extraction, and in the upper level, semantically meaningful representations of events are received by using these words. So our framework is data-driven and provides semantic outputs. Semantic similarity measurement of concepts is another important problem. In this paper we propose a method that uses conceptual status vector (CSV) and weighted semantic distance (WSD) to deal with it. Experimental results are presented which demonstrate the effectiveness of our approach on real-world videos captured from different scenes.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129166432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating component cues for human pose tracking
M. Lee, R. Nevatia
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570896
Tracking human body pose in monocular video in the presence of image noise, imperfect foreground extraction and partial occlusion of the body is important for many video analysis applications. Human pose tracking can be made more robust by integrating the detection of components such as the face and limbs. We propose an approach based on data-driven Markov chain Monte Carlo (DD-MCMC) in which component detection results are used to generate state proposals for pose estimation and initialization. Experimental results on a realistic indoor video sequence show that the method is able to track a person during turning and sitting movements.
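A schematic of the data-driven MCMC idea, assuming a simple Metropolis-style chain in which proposals come either from a local diffusion move or, with some probability, from poses suggested by the component detectors. The likelihood, proposal functions and acceptance rule below are placeholders, not the authors' formulation.

```python
# Hedged sketch of data-driven MCMC: mix detector-driven jumps with random-walk
# diffusion, accept with a Metropolis-style rule (symmetric-proposal
# approximation). All callables and parameters are placeholders.
import random

def dd_mcmc(init_pose, likelihood, diffuse, detector_proposals,
            n_iters=1000, p_data_driven=0.3):
    pose, score = init_pose, likelihood(init_pose)
    for _ in range(n_iters):
        if detector_proposals and random.random() < p_data_driven:
            cand = random.choice(detector_proposals)   # jump to a detected component state
        else:
            cand = diffuse(pose)                       # local random-walk move
        cand_score = likelihood(cand)                  # non-negative likelihood assumed
        if cand_score >= score or random.random() < cand_score / max(score, 1e-12):
            pose, score = cand, cand_score
    return pose
```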
{"title":"Integrating component cues for human pose tracking","authors":"M. Lee, R. Nevatia","doi":"10.1109/VSPETS.2005.1570896","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570896","url":null,"abstract":"Tracking human body pose in monocular video in the presence of image noise, imperfect foreground extraction and partial occlusion of the human body is important for many video analysis applications. Human pose tracking can be made more robust by integrating the detection of components such as face and limbs. We proposed an approach based on data-driven Markov chain Monte Carlo (DD-MCMC) where component detection results are used to generate state proposals for pose estimation and initialization. Experimental results on a realistic indoor video sequence show that the method is able to track a person during turning and sitting movements.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115495584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On calibrating a camera network using parabolic trajectories of a bouncing ball
Kuan-Wen Chen, Y. Hung, Yong-Sheng Chen
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570914
Camera networks are often used in visual surveillance systems for wide-range monitoring. In this paper, we present a novel method for calibrating a camera network that uses the trajectory of a bouncing ball as the calibration data. An important feature of our method is the use of the parabolic property of a ball's bouncing trajectory. This parabolic trajectory lies on a plane, called the parabolic trajectory plane (PT-plane), so that the relationship between the trajectory points and their corresponding image points is a homography. By combining the vertical velocity determined by Earth's gravity with the horizontal velocity calculated from the homography, we can compute the 2D coordinates of the trajectory points on the PT-plane. By throwing the ball multiple times, we obtain calibration points on multiple planes for calibrating both the intrinsic and extrinsic parameters of the networked cameras. Experimental results demonstrate the feasibility and accuracy of the proposed method.
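The central geometric observation can be sketched compactly: between bounces the ball's planar motion is constant-velocity horizontally and free fall vertically, so metric PT-plane coordinates follow from timestamps, and a homography to the image points can then be estimated. In the hypothetical sketch below the velocities vx and vy0 are treated as known inputs, whereas a real system would estimate them; function names are invented.

```python
# Hedged sketch of the key geometric step (not the authors' full method):
# metric 2D trajectory coordinates on the PT-plane from timestamps, then a
# homography to the observed image points.
import numpy as np
import cv2

G = 9.81  # m/s^2

def pt_plane_coords(timestamps, vx, vy0):
    """Metric (x, y) of the ball on the PT-plane, taking t = 0 at the bounce:
    x = vx*t (constant horizontal speed), y = vy0*t - 0.5*g*t^2 (free fall)."""
    t = np.asarray(timestamps, dtype=float)
    return np.stack([vx * t, vy0 * t - 0.5 * G * t * t], axis=1)

def homography_from_trajectory(image_points, timestamps, vx, vy0):
    """Needs at least 4 trajectory points. Returns H mapping PT-plane -> image."""
    plane_points = pt_plane_coords(timestamps, vx, vy0).astype(np.float32)
    H, _ = cv2.findHomography(plane_points, np.asarray(image_points, np.float32))
    return H
```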
{"title":"On calibrating a camera network using parabolic trajectories of a bouncing ball","authors":"Kuan-Wen Chen, Y. Hung, Yong-Sheng Chen","doi":"10.1109/VSPETS.2005.1570914","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570914","url":null,"abstract":"Camera networks are often used in visual surveillance systems for wide-range monitoring. In this paper, we present a novel method for calibrating a camera network, which uses the trajectory of a bouncing ball as the calibration data. An important feature of our method is the use of the parabolic property of a ball's bouncing trajectory. This parabolic trajectory lies on a plane, called the parabolic trajectory plane (PT-plane), so that the relationship between the trajectory's points and their corresponding image points is a homography. Combining the vertical velocity determined by the earth's gravity and the horizontal velocity calculated from the homography, we can compute the 2D coordinates of the trajectory points on the PT-plane. By throwing the ball multiple times, we obtain calibration points on multiple planes for calibrating both intrinsic and extrinsic parameters of the networked cameras. Experimental results have demonstrated the feasibility and accuracy of the proposed method.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"131 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114098686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On-line Conservative Learning for Person Detection
P. Roth, H. Grabner, D. Skočaj, Horst Bischof, A. Leonardis
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570919
We present a novel on-line conservative learning framework for an object detection system. All algorithms operate in an on-line mode; in particular, we also present a novel on-line AdaBoost method. The basic idea is to start with a very simple object detection system and to exploit a huge amount of unlabeled video data by being very conservative in selecting training examples. The key idea is to use reconstructive and discriminative classifiers in an iterative co-training fashion to arrive at increasingly better object detectors. We demonstrate the framework on a surveillance task in which we learn person detectors that are then tested on two surveillance video sequences. We start with a simple moving-object classifier and proceed with incremental PCA (on shape and appearance) as a reconstructive classifier, which in turn generates a training set for a discriminative on-line AdaBoost classifier.
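A minimal sketch of the conservative co-training loop, with scikit-learn's IncrementalPCA as the reconstructive model and an SGD classifier standing in for the paper's on-line AdaBoost (which scikit-learn does not provide). Thresholds, feature format and function names are assumptions.

```python
# Hedged sketch: the reconstructive model accepts only patches it reconstructs
# well; those patches become positive examples for an online discriminative
# classifier. Assumes flattened patches scaled to [0, 1] and batches of at
# least n_components patches (required by IncrementalPCA.partial_fit).
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.linear_model import SGDClassifier

pca = IncrementalPCA(n_components=16)   # reconstructive model (shape/appearance)
clf = SGDClassifier()                   # stand-in for on-line AdaBoost

def conservative_update(candidate_patches, background_patches, recon_thresh=0.05):
    """candidate_patches: detections from the simple moving-object classifier."""
    X = np.asarray(candidate_patches, dtype=float)
    pca.partial_fit(X)
    recon = pca.inverse_transform(pca.transform(X))
    err = np.mean((X - recon) ** 2, axis=1)
    confident_pos = X[err < recon_thresh]            # conservative selection
    if len(confident_pos) == 0:
        return
    neg = np.asarray(background_patches, dtype=float)
    Xt = np.vstack([confident_pos, neg])
    yt = np.hstack([np.ones(len(confident_pos)), np.zeros(len(neg))])
    clf.partial_fit(Xt, yt, classes=np.array([0.0, 1.0]))
```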
{"title":"On-line Conservative Learning for Person Detection","authors":"P. Roth, H. Grabner, D. Skočaj, Horst Bischof, A. Leonardis","doi":"10.1109/VSPETS.2005.1570919","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570919","url":null,"abstract":"We present a novel on-line conservative learning framework for an object detection system. All algorithms operate in an on-line mode, in particular we also present a novel on-line AdaBoost method. The basic idea is to start with a very simple object detection system and to exploit a huge amount of unlabeled video data by being very conservative in selecting training examples. The key idea is to use reconstructive and discriminative classifiers in an iterative co-training fashion to arrive at increasingly better object detectors. We demonstrate the framework on a surveillance task where we learn person detectors that are tested on two surveillance video sequences. We start with a simple moving object classifier and proceed with incremental PCA (on shape and appearance) as a reconstructive classifier, which in turn generates a training set for a discriminative on-line AdaBoost classifier","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124968006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online learning of region confidences for object tracking
Datong Chen, Jie Yang
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570891
This paper presents an online learning method for object tracking. Motivated by the way the human visual system shifts attention among local regions during tracking, we propose to allow different regions of an object to have different confidences. The confidence of each region is learned online to reflect the region's discriminative power in feature space and its probability of occlusion. The distribution of region confidences is used to guide a tracking algorithm in finding correspondences between adjacent video frames; only high-confidence regions are tracked, rather than the entire object. We demonstrate the feasibility of the proposed method in video surveillance applications. The method can be combined with many existing tracking systems to enhance their robustness.
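The confidence bookkeeping might look roughly like the following: each region's confidence is an exponentially smoothed match score, and only regions above a threshold are handed to the tracker. The update rate, threshold and data layout are invented for illustration.

```python
# Hedged sketch (not the authors' update rule): per-region confidences rise
# when a region is matched well in the next frame and fall when it is not
# (e.g. under occlusion); only high-confidence regions are tracked.

def update_confidences(confidences, match_scores, rate=0.2):
    """confidences, match_scores: dicts keyed by region id, values in [0, 1]."""
    return {rid: (1 - rate) * confidences[rid] + rate * match_scores.get(rid, 0.0)
            for rid in confidences}

def regions_to_track(confidences, min_conf=0.6):
    return [rid for rid, c in confidences.items() if c >= min_conf]
```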
{"title":"Online learning of region confidences for object tracking","authors":"Datong Chen, Jie Yang","doi":"10.1109/VSPETS.2005.1570891","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570891","url":null,"abstract":"This paper presents an online learning method for object tracking. Motivated by the attention shifting among local regions of a human vision system during tracking, we propose to allow different regions of an object to have different confidences. The confidence of each region is learned online to reflect the discriminative power of the region in feature space and the probability of occlusion. The distribution of region confidences is employed to guide a tracking algorithm to find correspondences in adjacent frames of video images. Only high confidence regions are tracked instead of the entire object. We demonstrate feasibility of the proposed method in video surveillance applications. The method can be combined with many other existing tracking systems to enhance robustness of these systems.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"2017 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123256096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vehicle Categorization: Parts for Speed and Accuracy
Eric Nowak, F. Jurie
Pub Date: 2005-10-15 | DOI: 10.1109/VSPETS.2005.1570926
In this paper we propose a framework for categorizing different types of vehicles. The difficulty comes from the high inter-class similarity and the high intra-class variability. We address this problem using a part-based recognition system. We focus in particular on the trade-off between the number of parts included in the vehicle models and the recognition rate, i.e., the trade-off between fast computation and high accuracy. We propose a high-level data transformation algorithm and a feature selection scheme adapted to hierarchical SVM classifiers to improve the performance of part-based vehicle models. We have tested the proposed framework on real data acquired by infrared surveillance cameras, as well as on visible images. On the infrared dataset, at the same speed-up factor of 100, our accuracy is 12% higher than that of a standard one-versus-one SVM.
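A two-level hierarchical SVM of the kind the feature-selection scheme is adapted to might be organised as below: a root SVM assigns a coarse vehicle group and a per-group SVM assigns the fine class, each node optionally using its own selected feature subset. The class hierarchy, the feature subsets and the scikit-learn stand-in are assumptions, not the authors' system.

```python
# Hedged sketch of a two-level hierarchical SVM with optional per-node feature
# selection. Assumes every group contains at least two fine classes.
import numpy as np
from sklearn.svm import SVC

class HierarchicalSVM:
    def __init__(self, groups, feature_subsets=None):
        self.groups = groups                   # e.g. {"small": ["car", "van"], ...}
        self.subsets = feature_subsets or {}   # {"root": idx_array, "small": ...}
        self.root = SVC(kernel="linear")
        self.leaves = {g: SVC(kernel="linear") for g in groups}

    def _cols(self, node, X):
        idx = self.subsets.get(node)           # selected features for this node
        return X if idx is None else X[:, idx]

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        to_group = {c: g for g, cs in self.groups.items() for c in cs}
        y_group = np.array([to_group[c] for c in y])
        self.root.fit(self._cols("root", X), y_group)      # coarse classifier
        for g, cs in self.groups.items():                   # fine classifiers
            mask = np.isin(y, cs)
            self.leaves[g].fit(self._cols(g, X[mask]), y[mask])
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        group_pred = self.root.predict(self._cols("root", X))
        out = np.empty(len(X), dtype=object)
        for g in self.groups:
            mask = group_pred == g
            if mask.any():
                out[mask] = self.leaves[g].predict(self._cols(g, X[mask]))
        return out
```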
{"title":"Vehicle Categorization: Parts for Speed and Accuracy","authors":"Eric Nowak, F. Jurie","doi":"10.1109/VSPETS.2005.1570926","DOIUrl":"https://doi.org/10.1109/VSPETS.2005.1570926","url":null,"abstract":"In this paper we propose a framework for categorization of different types of vehicles. The difficulty comes from the high inter-class similarity and the high intra-class variability. We address this problem using a part-based recognition system. We particularly focus on the trade-off between the number of parts included in the vehicle models and the recognition rate, i.e the trade-off between fast computation and high accuracy. We propose a high-level data transformation algorithm and a feature selection scheme adapted to hierarchical SVM classifiers to improve the performance of part-based vehicle models. We have tested the proposed framework on real data acquired by infrared surveillance cameras, and on visible images too. On the infrared dataset, with the same speedup factor of 100, our accuracy is 12% better than the standard one-versus-one SVM.","PeriodicalId":435841,"journal":{"name":"2005 IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128361602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}