Automatic hierarchical classification using time-based co-occurrences
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784654 | Pages: 333-339, Vol. 2
C. Stauffer
While a tracking system is unaware of the identity of any object it tracks, the identity remains the same for the entire tracking sequence. Our system leverages this information by using accumulated joint co-occurrences of the representations within the sequence to create a hierarchical binary-tree classifier of the representations. This classifier is useful for classifying sequences as well as individual instances. We illustrate the use of this method on two separate representations: the tracked object's position, movement, and size; and the tracked object's binary motion silhouettes.
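To make the pipeline concrete, here is a minimal Python sketch under two assumptions not spelled out in the abstract: each track has already been quantized to a sequence of prototype labels, and the binary split is a spectral bipartition of the normalized co-occurrence matrix (an illustrative stand-in, not necessarily the paper's splitting criterion).

```python
import numpy as np
from itertools import combinations

def cooccurrence(tracks, n_protos):
    """Accumulate joint co-occurrences of prototype labels within each track:
    the same identity holds for a whole sequence, so all pairs co-occur."""
    C = np.zeros((n_protos, n_protos))
    for seq in tracks:
        for i, j in combinations(seq, 2):
            C[i, j] += 1.0
            C[j, i] += 1.0
    return C

def build_tree(protos, C, min_size=2):
    """Recursive binary tree over prototypes; spectral bipartition of the
    normalized co-occurrence matrix serves as the splitting rule here."""
    if len(protos) <= min_size:
        return list(protos)
    sub = C[np.ix_(protos, protos)]
    d = sub.sum(axis=1) + 1e-9
    M = sub / np.sqrt(np.outer(d, d))          # symmetric normalization
    vals, vecs = np.linalg.eigh(M)
    side = vecs[:, -2] >= 0                    # sign of the 2nd eigenvector
    left = [p for p, s in zip(protos, side) if s]
    right = [p for p, s in zip(protos, side) if not s]
    if not left or not right:                  # degenerate split: make a leaf
        return list(protos)
    return [build_tree(left, C, min_size), build_tree(right, C, min_size)]

tracks = [[0, 0, 1], [1, 0, 1], [2, 3, 2], [3, 2, 3]]
C = cooccurrence(tracks, 4)
print(build_tree(list(range(4)), C))           # expect {0,1} and {2,3} to split apart
```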
{"title":"Automatic hierarchical classification using time-based co-occurrences","authors":"C. Stauffer","doi":"10.1109/CVPR.1999.784654","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784654","url":null,"abstract":"While a tracking system is unaware of the identity of any object it tracks, the identity remains the same for the entire tracking sequence. Our system leverages this information by using accumulated joint cooccurrences of the representations within the sequence to create a hierarchical binary-tree classifier of the representations. This classifier is useful to classify sequences as well as individual instances. We illustrate the use of this method on two separate representations the tracked object's position, movement, and size; and the tracked object's binary motion silhouettes.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"88 1","pages":"333-339 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86851785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D deformable image matching using multiscale minimization of global energy functions
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784724 | Pages: 478-484, Vol. 2
O. Musse, F. Heitz, J. Armspach
This paper presents a hierarchical framework for deformable matching of three-dimensional (3D) images. 3D shape deformations are parameterized at different scales, using a decomposition of the continuous deformation vector field over a sequence of nested subspaces generated from a single scaling function. This parameterization of the field enforces smoothness and differentiability constraints without explicit regularization. A global energy function, depending on the reference image and the transformed one, is minimized via a coarse-to-fine algorithm over this multiscale decomposition. Contrary to standard multigrid approaches, no reduction of image data is applied: the continuous deformation field is always sampled at the same resolution, ensuring that the same energy function is handled at each scale and that the energy decreases at each step of the minimization.
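The following 1D toy sketch illustrates the multiscale parameterization: the displacement field is expanded over translated hat (linear B-spline) scaling functions, and an SSD energy is minimized coarse-to-fine with Gauss-Newton steps while the field itself is always evaluated at full resolution. The basis, energy, and optimizer are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def hat_basis(n, k):
    """k translated hat (linear B-spline) scaling functions on an n-grid."""
    x = np.linspace(0.0, 1.0, n)
    knots = np.linspace(0.0, 1.0, k)
    h = knots[1] - knots[0]
    return np.maximum(0.0, 1.0 - np.abs((x[:, None] - knots[None, :]) / h))

def register_1d(ref, tgt, scales=(3, 5, 9), iters=20):
    """Coarse-to-fine Gauss-Newton on the SSD energy. Only the field's
    parameterization coarsens; the field itself stays at full resolution."""
    n = len(ref)
    x = np.arange(n, dtype=float)
    grad_t = np.gradient(tgt)
    u = np.zeros(n)
    for k in scales:
        B = hat_basis(n, k)                          # (n, k) basis at this scale
        c = np.linalg.lstsq(B, u, rcond=None)[0]     # carry field to new scale
        for _ in range(iters):
            u = B @ c
            warped = np.interp(x + u, x, tgt)
            g = np.interp(x + u, x, grad_t)
            A = B * g[:, None]                       # d(warped)/dc
            c += np.linalg.solve(A.T @ A + 1e-6 * np.eye(k), A.T @ (ref - warped))
        u = B @ c
    return u

i = np.arange(200)
ref = np.sin(2 * np.pi * i / 200.0)
tgt = np.sin(2 * np.pi * (i - 10) / 200.0)           # ref shifted by 10 samples
print("mean recovered displacement: %.1f samples" % register_1d(ref, tgt).mean())
```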
{"title":"3D deformable image matching using multiscale minimization of global energy functions","authors":"O. Musse, F. Heitz, J. Armspach","doi":"10.1109/CVPR.1999.784724","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784724","url":null,"abstract":"This paper presents a hierarchical framework to perform deformable matching of three dimensional (3D) images. 3D shape deformations are parameterized at different scales, using a decomposition of the continuous deformation vector field over a sequence of nested subspaces, generated from a single scaling function. The parameterization of the field enables to enforce smoothness and differentiability constraints without performing explicit regularization. A global energy function, depending on the reference image and the transformed one, is minimized via a coarse-to-fine algorithm over this multiscale decomposition. Contrary to standard multigrid approaches, no reduction of image data is applied. The continuous field of deformation is always sampled at the same resolution, ensuring that the same energy function is handled at each scale and that the energy decreases at each step of the minimization.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"3 1","pages":"478-484 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88010025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognition of strings using nonstationary Markovian models: an application in ZIP code recognition
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784626 | Pages: 174-179, Vol. 2
D. Bouchaffra, Venu Govindaraju, S. Srihari
This paper presents nonstationary Markovian models and their application to the recognition of strings of tokens, such as ZIP codes in the US mailstream. Unlike traditional approaches, where digits are simply recognized in isolation, the novelty of our approach lies in the manner in which recognition scores and domain-specific knowledge about the frequency distribution of various combinations of digits are integrated into one unified model. The domain knowledge is derived from postal directory files and feeds into the models as n-gram statistics that are seamlessly integrated with the recognition scores of digit images. We present the recognition accuracy (90%) achieved on a set of 20,000 ZIP codes.
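Integrating recognizer scores with position-dependent n-gram priors amounts to a Viterbi pass over the digit lattice. A minimal sketch, assuming the per-position log scores and the nonstationary bigram tables are already given:

```python
import numpy as np

def decode_zip(digit_logp, bigram_logp):
    """Viterbi decoding of a 5-digit string.

    digit_logp:  (5, 10) log recognition scores per position.
    bigram_logp: (4, 10, 10) position-dependent log priors, where
                 bigram_logp[t, i, j] = log P(d_{t+1}=j | d_t=i)."""
    T, D = digit_logp.shape
    dp = digit_logp[0].copy()
    back = np.zeros((T, D), dtype=int)
    for t in range(1, T):
        cand = dp[:, None] + bigram_logp[t - 1] + digit_logp[t][None, :]
        back[t] = cand.argmax(axis=0)
        dp = cand.max(axis=0)
    best = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(back[t, best[-1]]))
    return best[::-1]

rng = np.random.default_rng(1)
scores = np.log(rng.dirichlet(np.ones(10), size=5))         # fake recognizer output
priors = np.log(rng.dirichlet(np.ones(10), size=(4, 10)))   # fake directory n-grams
print(decode_zip(scores, priors))
```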
{"title":"Recognition of strings using nonstationary Markovian models: an application in ZIP code recognition","authors":"D. Bouchaffra, Venu Govindaraju, S. Srihari","doi":"10.1109/CVPR.1999.784626","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784626","url":null,"abstract":"This paper presents nonstationary Markovian models and their application to recognition of strings of tokens, such as ZIP codes in the US mailstream. Unlike traditional approaches where digits are simply recognized in isolation, the novelty of our approach lies in the manner in which recognitions scores along with domain specific knowledge about the frequency distribution of various combination of digits are all integrated into one unified model. The domain knowledge is derived from postal directory files. This data feeds into the models as n-grams statistics that are seamlessly integrated with recognition scores of digit images. We present the recognition accuracy (90%) achieved on a set of 20,000 ZIP codes.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"32 1","pages":"174-179 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84387926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An integral formulation for differential photometric stereo
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.786927 | Pages: 119-124, Vol. 1
James J. Clark, H. Pekau
In this paper we present an integral formulation of the active differential photometric stereo algorithm proposed by Clark (1992) and by Iwahori et al. (1992, 1994). The algorithm presented here does not require measurement of derivatives of image quantities; instead it requires the computation of integrals of image quantities. The algorithm is therefore more robust to sensor noise and light source position errors than the Clark-Iwahori algorithm. We show that it can be efficiently implemented in practice with a planar distributed light source, and present experimental results demonstrating its efficacy.
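The following toy comparison is not the Clark-Pekau formulation; it merely illustrates numerically why estimates built from integrals of the data (here a windowed least-squares slope, a weighted sum of samples) are more robust to sensor noise than pointwise finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 400)
f = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)
true_deriv = 2 * np.pi * np.cos(2 * np.pi * x)

# derivative-based estimate: pointwise finite differences amplify the noise
d_diff = np.gradient(f, x)

def windowed_slope(f, x, w=15):
    """Integral-style estimate: the slope of a least-squares line over a
    window is a weighted sum (a discrete integral) of the data."""
    out = np.empty_like(f)
    for i in range(len(f)):
        lo, hi = max(0, i - w), min(len(f), i + w + 1)
        xs, ys = x[lo:hi] - x[lo:hi].mean(), f[lo:hi] - f[lo:hi].mean()
        out[i] = (xs @ ys) / (xs @ xs)
    return out

d_int = windowed_slope(f, x)
rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
print("RMSE finite difference:", rmse(d_diff - true_deriv))
print("RMSE integral estimate:", rmse(d_int - true_deriv))
```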
{"title":"An integral formulation for differential photometric stereo","authors":"James J. Clark, H. Pekau","doi":"10.1109/CVPR.1999.786927","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786927","url":null,"abstract":"In this paper we present an integral formulation of the active differential photometric stereo algorithm proposed by Clark (1992) and by Iwahori et al. [1992, 1994). The algorithm presented in this paper does not require measurement of derivatives of image quantities, but requires instead the computation of integrals of image quantities. Thus the algorithm is more robust to sensor noise and light source position errors than the Clark-Iwahori algorithm. We show that the algorithm presented in the paper can be efficiently implemented in practice with a planar distributed light source, and present experimental results demonstrating the efficacy of the algorithm.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"17 1","pages":"119-124 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91045048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background estimation and removal based on range and color
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784721 | Pages: 459-464, Vol. 2
G. Gordon, Trevor Darrell, M. Harville, J. Woodfill
Background estimation and removal based on the joint use of range and color data produces results superior to those achievable with either data source alone. This is increasingly relevant as inexpensive, real-time, passive range systems become more accessible through novel hardware and increased CPU processing speeds. Range is a powerful signal for segmentation which is largely independent of color and hence not affected by the classic color segmentation problems of shadows and objects with color similar to the background. However, range alone is not sufficient for good segmentation: depth measurements are rarely available at all pixels in the scene, and foreground objects may be indistinguishable in depth when they are close to the background. Color segmentation is complementary in these cases. Surprisingly, little work has been done to date on joint range and color segmentation. We describe and demonstrate a background estimation method based on multidimensional (range and color) clustering at each image pixel. Segmentation of the foreground in a given frame is performed via comparison with background statistics in range and normalized color. Important implementation issues, such as the treatment of shadows and low-confidence measurements, are discussed in detail.
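A simplified per-pixel sketch of the idea, assuming a stack of empty-scene training frames and using independent Gaussian statistics per pixel rather than the paper's multidimensional clustering. Chromaticity (normalized color) makes the color test tolerant to shadows, and pixels with no valid depth fall back on color alone:

```python
import numpy as np

def fit_background(depths, colors, eps=1e-6):
    """Per-pixel background statistics from N empty-scene frames.
    depths: (N, H, W), NaN where the range sensor gave no measurement.
    colors: (N, H, W, 3) RGB."""
    mu_d = np.nanmean(depths, axis=0)
    sd_d = np.nanstd(depths, axis=0) + eps
    chroma = colors[..., :2] / (colors.sum(-1, keepdims=True) + eps)
    return mu_d, sd_d, chroma.mean(axis=0), chroma.std(axis=0) + eps

def foreground(depth, color, bg, k_d=3.0, k_c=3.0, eps=1e-6):
    """Foreground where a pixel deviates from the background in range
    (when depth is measurable) or in normalized color (shadow-tolerant)."""
    mu_d, sd_d, mu_c, sd_c = bg
    valid = ~np.isnan(depth) & ~np.isnan(mu_d)
    fg_range = valid & (np.abs(depth - mu_d) > k_d * sd_d)
    chroma = color[..., :2] / (color.sum(-1, keepdims=True) + eps)
    fg_color = (np.abs(chroma - mu_c) > k_c * sd_c).any(axis=-1)
    return np.where(valid, fg_range | fg_color, fg_color)

rng = np.random.default_rng(0)
depths = 3.0 + 0.01 * rng.standard_normal((10, 8, 8))
colors = 100.0 + rng.standard_normal((10, 8, 8, 3))
bg = fit_background(depths, colors)
d, c = depths[0].copy(), colors[0].copy()
d[4, 4] = 1.8                                 # an object closer than the wall
print(bool(foreground(d, c, bg)[4, 4]))       # True: caught by the range cue
```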
{"title":"Background estimation and removal based on range and color","authors":"G. Gordon, Trevor Darrell, M. Harville, J. Woodfill","doi":"10.1109/CVPR.1999.784721","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784721","url":null,"abstract":"Background estimation and removal based on the joint use of range and color data produces superior results than can be achieved with either data source alone. This is increasingly relevant as inexpensive, real-time, passive range systems become more accessible through novel hardware and increased CPU processing speeds. Range is a powerful signal for segmentation which is largely independent of color and hence not effected by the classic color segmentation problems of shadows and objects with color similar to the background. However range alone is also not sufficient for the good segmentation: depth measurements are rarely available at all pixels in the scene, and foreground objects may be indistinguishable in depth when they are close to the background. Color segmentation is complementary in these cases. Surprisingly, little work has been done to date on joint range and color segmentation. We describe and demonstrate a background estimation method based on a multidimensional (range and color) clustering at each image pixel. Segmentation of the foreground in a given frame is performed via comparison with background statistics in range and normalized color. Important implementation issues such as treatment of shadows and low confidence measurements are discussed in detail.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"19 1","pages":"459-464 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91275415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-metric calibration of wide-angle lenses and polycameras
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784714 | Pages: 413-419, Vol. 2
R. Swaminathan, S. Nayar
Images taken with wide-angle cameras tend to have severe distortions which pull points towards the optical center. This paper proposes a method for recovering the distortion parameters without the use of any calibration objects. The distortions cause straight lines in the scene to appear as curves in the image. Our algorithm seeks the distortion parameters that would map the image curves to straight lines. The user selects a small set of points along the image curves. Recovery of the parameters is formulated as the minimization of an objective function designed to explicitly account for noise in the selected image points. Experimental results are presented for synthetic data with different noise levels as well as for real images. Once calibrated, the image streams from these cameras can be undistorted in real time using look-up tables. We also present an application of this calibration method to wide-angle camera clusters, which we call polycameras. We apply our distortion correction technique to a polycamera with four wide-angle cameras to create a high-resolution 360-degree panorama in real time.
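A minimal sketch of such an objective, assuming a single-parameter radial model p_u = p_d (1 + k r^2) centered at the image center: the straightness of each undistorted user-selected curve is scored by its total-least-squares line residual, and k is recovered by 1D minimization. The paper's distortion model and noise handling are richer than this.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def undistort(pts, k):
    """One-parameter radial model: p_u = p_d * (1 + k * r^2), origin = center."""
    r2 = (pts ** 2).sum(axis=1, keepdims=True)
    return pts * (1.0 + k * r2)

def straightness_error(pts):
    """Sum of squared distances to the total-least-squares line through pts."""
    q = pts - pts.mean(axis=0)
    return np.linalg.svd(q, compute_uv=False)[-1] ** 2

def calibrate(curves):
    """curves: list of (n_i, 2) arrays of clicked points in centered coords."""
    cost = lambda k: sum(straightness_error(undistort(c, k)) for c in curves)
    return minimize_scalar(cost, bounds=(-1e-5, 1e-5), method="bounded").x

def distort(pts, k, iters=50):
    """Numerically invert the radial model (to synthesize test data)."""
    r_u = np.linalg.norm(pts, axis=1, keepdims=True)
    t = r_u.copy()
    for _ in range(iters):                    # fixed point: t = r_u / (1 + k t^2)
        t = r_u / (1.0 + k * t ** 2)
    return pts * t / (r_u + 1e-12)

k_true = 4e-6
lines = [np.stack([np.linspace(-200.0, 200.0, 30), np.full(30, y)], axis=1)
         for y in (-150.0, 80.0)]
curves = [distort(p, k_true) for p in lines]  # what the camera would see
print("recovered k: %.2e (true 4.00e-06)" % calibrate(curves))
```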
{"title":"Non-metric calibration of wide-angle lenses and polycameras","authors":"R. Swaminathan, S. Nayar","doi":"10.1109/CVPR.1999.784714","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784714","url":null,"abstract":"Images taken with wide-angle cameras tend to have severe distortions which pull points towards the optical center. This paper proposes a method for recovering the distortion parameters without the use of any calibration objects. The distortions cause straight lines in the scene to appear as curves in the image. Our algorithm seeks to find the distortion parameters that would map the image curves to straight lines. The user selects a small set of points along the image curves. Recovery of the parameters is formulated as the minimization of an objective function which is designed to explicitly account for noise in the selected image points. Experimental results are presented for synthetic data with different noise levels as well as for real images. Once calibrated, the image streams from these cameras can be undistorted in real time using look up tables. We also present an application of this calibration method for wide-angle camera clusters, which we call polycameras. We apply our distortion correction technique to a polycamera with four wide-angle cameras to create a high resolution 360 degree panorama in real-time.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"1 1","pages":"413-419 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89980184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating model parameters and boundaries by minimizing a joint, robust objective function
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784710 | Pages: 387-393, Vol. 2
C. Stewart, Kishore Bubna, A. Perera
Many problems in computer vision require estimation of both model parameters and boundaries, which limits the usefulness of standard estimation techniques from statistics. Example problems include surface reconstruction from range data, estimation of parametric motion models, fitting circular or elliptic arcs to edgel data, and many others. This paper introduces a new estimation technique, called the "Domain Bounding M-Estimator" (DBM-Estimator), which generalizes ordinary M-estimators by combining error measures on model parameters and boundaries in a joint, robust objective function. Minimizing the objective function from a rough initialization yields simultaneous estimates of parameters and boundaries. The DBM-Estimator has been applied to estimating line segments, surfaces, and the symmetry transformation between two edgel chains. It is unaffected by outliers and prevents boundary estimates from crossing even small-magnitude discontinuities.
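An illustrative 1D version of the idea, not the paper's exact objective: inside the current domain each point pays a bounded robust (Geman-McClure) residual cost, points outside pay a fixed cost, and the fit alternates between a robust parameter estimate and a scan over candidate right-boundary positions:

```python
import numpy as np

def gm(r):
    """Geman-McClure robust rho, bounded by 1."""
    return r * r / (1.0 + r * r)

def robust_line(x, y, iters=10):
    """IRLS line fit with Geman-McClure weights; returns coeffs and scale."""
    w = np.ones_like(x)
    for _ in range(iters):
        sw = np.sqrt(w)
        A = np.stack([np.ones_like(x), x], axis=1) * sw[:, None]
        coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
        r = y - (coef[0] + coef[1] * x)
        s = 1.4826 * np.median(np.abs(r)) + 1e-9   # robust scale (MAD)
        w = 1.0 / (1.0 + (r / s) ** 2) ** 2        # GM influence weights
    return coef, s

def dbm_fit(x, y, outside_cost=0.5, rounds=5):
    """Alternate a robust fit on the current domain with a boundary search."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    t = x[-1]                                      # initial right boundary
    for _ in range(rounds):
        coef, s = robust_line(x[x <= t], y[x <= t])
        r = (y - (coef[0] + coef[1] * x)) / s
        # total cost with the boundary placed at each sorted point
        cost = np.cumsum(gm(r)) + outside_cost * np.arange(len(x) - 1, -1, -1)
        t = x[int(cost.argmin())]
    return coef, t

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 10.0, 200))
y = 1.0 + 0.5 * x + 0.05 * rng.standard_normal(200)
y[x > 7.0] = rng.uniform(0.0, 8.0, (x > 7.0).sum())   # structure ends at x = 7
coef, t = dbm_fit(x, y)
print("intercept %.2f  slope %.2f  boundary %.2f" % (coef[0], coef[1], t))
```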
{"title":"Estimating model parameters and boundaries by minimizing a joint, robust objective function","authors":"C. Stewart, Kishore Bubna, A. Perera","doi":"10.1109/CVPR.1999.784710","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784710","url":null,"abstract":"Many problems in computer vision require estimation of both model parameters and boundaries, which limits the usefulness of standard estimation techniques from statistics. Example problems include surface reconstruction from range data, estimation of parametric motion models, fitting circular or elliptic arcs to edgel data, and many others. This paper introduces a new estimation technique, called the \"Domain Bounding M-Estimator\", which is a generalization of ordinary M-estimators combining error measures on model parameters and boundaries in a joint, robust objective function. Minimization of the objective function given a rough initialization yields simultaneous estimates of parameters and boundaries. The DBM-Estimator has been applied to estimating line segments, surfaces, and the symmetry transformation between two edgel chains. It is unaffected by outliers and prevents boundary estimates from crossing even small magnitude discontinuities.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"6 1","pages":"387-393 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89981043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The discriminatory power of ordinal measures - towards a new coefficient
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.786920 | Pages: 76-81, Vol. 1
S. Scherer, A. Pinz, P. Werth
Perspective distortion, occlusion, and specular reflection are challenging problems in shape-from-stereo. In this paper we review a recently published area-based stereo matching algorithm (Bhat and Nayar, 1998) designed to be robust in these cases. Although the algorithm is an important contribution to stereo matching, we show that its coefficient has low discriminatory power, which leads to a significant number of multiple best matches. To cope with this drawback we introduce a new normalized ordinal correlation coefficient. Experiments showing the behavior of the proposed coefficient are performed on various datasets, including real data with ground truth. The new coefficient reduces the occurrence of multiple best matches to almost zero percent; it is also more robust while remaining equally accurate. These benefits are achieved at almost no additional computational cost.
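To see what a normalized ordinal coefficient buys, here is a sketch of window matching along a scanline using Kendall's tau as a stand-in ordinal measure (the paper's coefficient is a different, new one). Because the score depends only on ranks, it is invariant to monotonic photometric changes between the two views:

```python
import numpy as np
from scipy.stats import kendalltau

def best_match(left, right, x, w=7, max_disp=30):
    """Ordinal window matching along a scanline: score each candidate
    disparity with a rank-based coefficient (Kendall's tau here)."""
    patch = left[x - w : x + w + 1]
    best_d, best_tau = None, -2.0
    for d in range(max_disp + 1):
        if x - d - w < 0:
            break
        tau, _ = kendalltau(patch, right[x - d - w : x - d + w + 1])
        if tau > best_tau:
            best_d, best_tau = d, tau
    return best_d, best_tau

rng = np.random.default_rng(3)
scan = rng.standard_normal(120)
left = scan
right = 0.4 * np.roll(scan, -12) + 5.0        # shifted + monotonic gain/offset
print(best_match(left, right, x=60))          # -> (12, 1.0): ranks are preserved
```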
{"title":"The discriminatory power of ordinal measures - towards a new coefficient","authors":"S. Scherer, A. Pinz, P. Werth","doi":"10.1109/CVPR.1999.786920","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786920","url":null,"abstract":"Perspective distortion, occlusion and specular reflection are challenging problems in shape-from-stereo. In this paper we review one recently published area-based stereo matching algorithm (Bhat and Nayar, 1998) designed to be robust in these cases. Although the algorithm is an important contribution to stereo-matching, we show that its coefficient has a low discriminatory power, which leads to a significant number of multiple best matches. In order to cope with this drawback we introduce a new normalized ordinal correlation coefficient. Experiments showing the behavior of the proposed coefficient are performed on various datasets including real data with ground truth. The new coefficient reduces the occurrence of multiple best matches to almost zero per cent. It also shows a more robust and equally accurate behavior. These benefits are achieved at almost no additional computational costs.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"29 1","pages":"76-81 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84421310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gesture localization and recognition using probabilistic visual learning
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784615 | Pages: 98-103, Vol. 2
Raouf Hamdan, F. Heitz, L. Thoraval
A generic approach for the extraction and recognition of gestures from raw grey-level images is presented. Probabilistic visual learning, a learning method recently proposed by B. Moghaddam and A. Pentland (1997), is used to create a set of compact statistical representations of gesture appearance on low-dimensional eigenspaces. The same probabilistic modeling framework is used to extract and track gestures and to perform gesture recognition over long image sequences. Gesture extraction and tracking are based on maximum likelihood gesture detection in the input image. Recognition is performed by using the set of learned probabilistic appearance models as estimates of the emission probabilities of a continuous density hidden Markov model (CDHMM). Although the segmentation and CDHMM-based recognition use raw grey-level images, the method is fast thanks to the data compression obtained by probabilistic visual learning. The approach is general and may be applied to other visual motion recognition tasks; it requires neither application-tailored extraction of image features nor the use of markers or gloves. A real-time implementation of the method on a standard PC-based vision system is under consideration.
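A compact sketch of the appearance-likelihood part under one common reading of Moghaddam and Pentland's estimator: a Gaussian in the principal subspace (distance in feature space) plus an isotropic residual term (distance from feature space). The class names and synthetic data are illustrative; in the full system such log-likelihoods would serve as CDHMM emission scores.

```python
import numpy as np

class EigenAppearance:
    """Eigenspace appearance model with a Moghaddam-Pentland style
    likelihood: Gaussian in the principal subspace plus an isotropic
    term on the residual. Simplified sketch, not the full estimator."""

    def fit(self, X, m=10):
        self.mu = X.mean(axis=0)
        _, S, Vt = np.linalg.svd(X - self.mu, full_matrices=False)
        lam = S ** 2 / len(X)                  # covariance eigenvalues
        self.V, self.lam, self.m, self.d = Vt[:m].T, lam[:m], m, X.shape[1]
        # residual variance: average of the discarded eigenvalues
        self.rho = max(float(lam[m:].mean()) if lam.size > m else 1e-6, 1e-6)
        return self

    def log_likelihood(self, x):
        e = x - self.mu
        y = self.V.T @ e                       # subspace coefficients
        difs = (y ** 2 / self.lam).sum()       # Mahalanobis distance in subspace
        dffs = e @ e - y @ y                   # squared residual outside it
        return -0.5 * (difs + dffs / self.rho + np.log(self.lam).sum()
                       + (self.d - self.m) * np.log(self.rho))

rng = np.random.default_rng(4)
proto = rng.standard_normal((2, 256))          # two synthetic gesture classes
train = [p + 0.1 * rng.standard_normal((40, 256)) for p in proto]
models = [EigenAppearance().fit(Xi) for Xi in train]
frame = proto[1] + 0.1 * rng.standard_normal(256)
scores = [mdl.log_likelihood(frame) for mdl in models]   # emission scores
print("classified as gesture", int(np.argmax(scores)))   # -> 1
```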
{"title":"Gesture localization and recognition using probabilistic visual learning","authors":"Raouf Hamdan, F. Heitz, L. Thoraval","doi":"10.1109/CVPR.1999.784615","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784615","url":null,"abstract":"A generic approach for the extraction and recognition of gesture using raw grey-level images is presented. The probabilistic visual learning approach, a learning method recently proposed by B. Moghaddam and A. Pentland (1997), is used to create a set of compact statistical representations of gesture appearance on low dimensional eigenspaces. The same probabilistic modeling framework is used to extract and track gesture and to perform gesture recognition over long image sequences. Gesture extraction and tracking are based on maximum likelihood gesture detection in the input image. Recognition is performed by using the set of learned probabilistic appearance models as estimates of the emission probabilities of a continuous density hidden Markov model (CDHMM). Although the segmentation and CDHMM-based recognition use raw grey-level images, the method is fast, thanks to the data compression obtained by probabilistic visual learning. The approach is comprehensive and may be applied to other visual motion recognition tasks. It does not require application-tailored extraction of image features, the use of markers or gloves. A real-time implementation of the method on a standard PC-based vision system is under consideration.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"25 1","pages":"98-103 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83253201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time periodic motion detection, analysis, and applications
Pub Date: 1999-06-23 | DOI: 10.1109/CVPR.1999.784652 | Pages: 326-332, Vol. 2
Ross Cutler, L. Davis
We describe a new technique to detect and analyze periodic motion as seen from both static and moving cameras. By tracking objects of interest, we compute an object's self-similarity as it evolves in time. For periodic motion, the self-similarity measure is itself periodic, and we apply time-frequency analysis to detect and characterize the periodic motion. A real-time system has been implemented to track and classify objects using periodicity. Examples of object classification, person counting, and non-stationary periodicity are provided.
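A minimal sketch of the self-similarity computation, assuming the tracker already provides aligned object images (here synthetic binary silhouettes). A plain FFT of one similarity curve stands in for the time-frequency analysis, which would be needed for the non-stationary cases the paper handles:

```python
import numpy as np

def self_similarity(frames):
    """S[i, j] = mean absolute difference between object images i and j."""
    F = frames.reshape(len(frames), -1).astype(float)
    return np.abs(F[:, None, :] - F[None, :, :]).mean(axis=2)

def dominant_period(S):
    """Period (in frames) from the power spectrum of one similarity curve."""
    s = S[0] - S[0].mean()
    power = np.abs(np.fft.rfft(s)) ** 2
    freqs = np.fft.rfftfreq(len(s))
    k = 1 + int(power[1:].argmax())            # skip the DC bin
    return 1.0 / freqs[k]

# synthetic silhouettes: a bar sliding with sawtooth motion, period 16 frames
T, H, W, period = 96, 16, 32, 16
frames = np.zeros((T, H, W), dtype=np.uint8)
for t in range(T):
    c = 4 + (t % period)                       # column position of the bar
    frames[t, :, c : c + 8] = 1
print("estimated period: %.1f frames" % dominant_period(self_similarity(frames)))
```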
{"title":"Real-time periodic motion detection, analysis, and applications","authors":"Ross Cutler, L. Davis","doi":"10.1109/CVPR.1999.784652","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784652","url":null,"abstract":"We describe a new technique to detect and analyze periodic motion as seen from both a static and moving camera. By tracking objects of interest, we compute an object's self-similarity as it evolves in time. For periodic motion, the self-similarity measure is also periodic, and we apply time-frequency analysis to detect and characterize the periodic motion. A real-time system has been implemented to track and classify objects using periodicity. Examples of object classification, person counting, and non-stationary periodicity are provided.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"37 1","pages":"326-332 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82993643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}