Background modeling and subtraction of dynamic scenes
Pub Date: 2003-10-13 | DOI: 10.1109/ICCV.2003.1238641
Antoine Monnet, Anurag Mittal, N. Paragios, Visvanathan Ramesh
Background modeling and subtraction is a core component in motion analysis. The central idea behind such a module is to create a probabilistic representation of the static scene, which is compared with the current input to perform subtraction. Such an approach is efficient when the scene to be modeled is a static structure with limited perturbation. In this paper, we address the problem of modeling dynamic scenes, where the assumption of a static background is not valid: waving trees, beaches, escalators, and natural scenes with rain or snow are examples. Inspired by the work of Doretto et al. (2003), we propose an on-line auto-regressive model to capture and predict the behavior of such scenes. For event detection, we introduce a new metric based on a state-driven comparison between the prediction and the actual frame. Promising results demonstrate the potential of the proposed framework.
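The following is a minimal sketch, not the authors' implementation, of an auto-regressive background model in the spirit of the dynamic-texture formulation cited above: frames are projected onto a PCA basis, an AR(1) model is fit to the resulting state trajectory, and large prediction residuals flag foreground. The window size, state dimension, and threshold are illustrative assumptions, and the comparison here is a plain per-pixel residual rather than the paper's state-driven metric.

import numpy as np

def fit_ar1(states):
    """Least-squares estimate of A in x_t ~= A x_{t-1}; states is (k, T)."""
    X_prev, X_next = states[:, :-1], states[:, 1:]
    A, *_ = np.linalg.lstsq(X_prev.T, X_next.T, rcond=None)
    return A.T

def detect_foreground(frames, new_frame, n_components=10, thresh=25.0):
    """frames: (T, H, W) recent background frames; new_frame: (H, W)."""
    T, H, W = frames.shape
    Y = frames.reshape(T, -1).astype(np.float64)      # one row per frame
    mean = Y.mean(axis=0)
    Yc = Y - mean
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False) # PCA basis from the window
    C = Vt[:n_components]                             # (k, H*W)
    states = C @ Yc.T                                 # (k, T) state trajectory
    A = fit_ar1(states)
    x_pred = A @ states[:, -1]                        # predict the next state
    y_pred = C.T @ x_pred + mean                      # predicted frame
    residual = np.abs(new_frame.reshape(-1) - y_pred)
    return (residual > thresh).reshape(H, W)          # foreground mask

# usage with synthetic data:
# frames = np.random.rand(30, 48, 64) * 255
# mask = detect_foreground(frames, frames[-1])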
{"title":"Background modeling and subtraction of dynamic scenes","authors":"Antoine Monnet, Anurag Mittal, N. Paragios, Visvanathan Ramesh","doi":"10.1109/ICCV.2003.1238641","DOIUrl":"https://doi.org/10.1109/ICCV.2003.1238641","url":null,"abstract":"Background modeling and subtraction is a core component in motion analysis. The central idea behind such module is to create a probabilistic representation of the static scene that is compared with the current input to perform subtraction. Such approach is efficient when the scene to be modeled refers to a static structure with limited perturbation. In this paper, we address the problem of modeling dynamic scenes where the assumption of a static background is not valid. Waving trees, beaches, escalators, natural scenes with rain or snow are examples. Inspired by the work proposed by Doretto et al. (2003), we propose an on-line auto-regressive model to capture and predict the behavior of such scenes. Towards detection of events we introduce a new metric that is based on a state-driven comparison between the prediction and the actual frame. Promising results demonstrate the potentials of the proposed framework.","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"244 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123311335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognition of group activities using dynamic probabilistic networks
Pub Date: 2003-10-13 | DOI: 10.1109/ICCV.2003.1238423
S. Gong, T. Xiang
Dynamic Probabilistic Networks (DPNs) are exploited for modeling the temporal relationships among a set of different object temporal events in a scene, yielding a coherent and robust scene-level behaviour interpretation. In particular, we develop a Dynamically Multi-Linked Hidden Markov Model (DML-HMM) to interpret group activities involving multiple objects captured in an outdoor scene. The model is based on the discovery of salient dynamic interlinks among multiple temporal events using DPNs. Object temporal events are detected and labeled using Gaussian mixture models with automatic model order selection. A DML-HMM is built using a factorisation based on Schwarz's Bayesian Information Criterion, so that its topology is intrinsically determined by the underlying causality and temporal order among the different object events. Our experiments demonstrate that its performance in modelling group activities in a noisy outdoor scene is superior to that of a Multi-Observation Hidden Markov Model (MOHMM), a Parallel Hidden Markov Model (PaHMM) and a Coupled Hidden Markov Model (CHMM).
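As a small illustration of one ingredient named in the abstract, the sketch below labels per-event feature vectors with a Gaussian mixture whose component count is chosen automatically by Schwarz's Bayesian Information Criterion. The feature format and the candidate range of component counts are assumptions; the DML-HMM construction itself is not reproduced here.

import numpy as np
from sklearn.mixture import GaussianMixture

def label_events(features, k_max=8, seed=0):
    """features: (N, D) array of per-event descriptors. Returns labels and model."""
    best_gmm, best_bic = None, np.inf
    for k in range(1, k_max + 1):
        gmm = GaussianMixture(n_components=k, covariance_type='full',
                              random_state=seed).fit(features)
        bic = gmm.bic(features)   # Schwarz's Bayesian Information Criterion
        if bic < best_bic:
            best_gmm, best_bic = gmm, bic
    return best_gmm.predict(features), best_gmm

# usage: labels, gmm = label_events(np.random.randn(200, 4))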
{"title":"Recognition of group activities using dynamic probabilistic networks","authors":"S. Gong, T. Xiang","doi":"10.1109/ICCV.2003.1238423","DOIUrl":"https://doi.org/10.1109/ICCV.2003.1238423","url":null,"abstract":"Dynamic Probabilistic Networks (DPNs) are exploited for modeling the temporal relationships among a set of different object temporal events in the scene for a coherent and robust scene-level behaviour interpretation. In particular, we develop a Dynamically Multi-Linked Hidden Markov Model (DML-HMM) to interpret group activities involving multiple objects captured in an outdoor scene. The model is based on the discovery of salient dynamic interlinks among multiple temporal events using DPNs. Object temporal events are detected and labeled using Gaussian Mixture Models with automatic model order selection. A DML-HMM is built using Schwarz's Bayesian Information Criterion based factorisation resulting in its topology being intrinsically determined by the underlying causality and temporal order among different object events. Our experiments demonstrate that its performance on modelling group activities in a noisy outdoor scene is superior compared to that of a Multi-Observation Hidden Markov Model (MOHMM), a Parallel Hidden Markov Model (PaHMM) and a Coupled Hidden Markov Model (CHMM).","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125939096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combinatorial constraints on multiple projections of a set of points
Pub Date: 2003-10-13 | DOI: 10.1109/ICCV.2003.1238459
Tomáš Werner
Multiple projections of a scene cannot be arbitrary; the allowed configurations are given by matching constraints. This paper presents new matching constraints on multiple projections of a rigid point set by uncalibrated cameras, obtained by formulating the problem in oriented projective rather than projective geometry. They follow from the consistency of orientations of camera rays and from the fact that the scene lies in affine rather than projective space. Because of their non-parametric nature, we call them combinatorial. The constraints are derived in a unified theoretical framework using the theory of oriented matroids. For example, we present constraints on 4 point correspondences for 2D camera resectioning, on 3 correspondences in two 1D cameras, and on 4 correspondences in two 2D cameras.
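For intuition only: oriented-matroid constraints are phrased in terms of signs of determinants (the chirotope). The helper below computes that sign pattern for triples of image points in homogeneous coordinates; comparing such patterns across views for a candidate correspondence is the flavour of non-parametric test the paper's constraints reduce to. The consistency check suggested in the usage note is an assumption, not the paper's derivation.

import numpy as np
from itertools import combinations

def chirotope(points_h):
    """points_h: (N, 3) homogeneous 2D points. Returns {(i, j, k): sign of det}."""
    signs = {}
    for i, j, k in combinations(range(len(points_h)), 3):
        det = np.linalg.det(points_h[[i, j, k]])
        signs[(i, j, k)] = int(np.sign(np.round(det, 12)))
    return signs

# usage (hypothetical check): compare chirotope(view1_pts) with chirotope(view2_pts)
# for a candidate correspondence; an inconsistent sign pattern rules the match out
# under orientation constraints of this kind.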
{"title":"Combinatorial constraints on multiple projections of set points","authors":"Tomáš Werner","doi":"10.1109/ICCV.2003.1238459","DOIUrl":"https://doi.org/10.1109/ICCV.2003.1238459","url":null,"abstract":"Multiple projections of a scene cannot be arbitrary, the allowed configurations being given by matching constraints. This paper presents new matching constraints on multiple projections of a rigid point set by uncalibrated cameras, obtained by formulation in the oriented projective rather than projective geometry. They follow from consistency of orientations of camera rays and from the fact that the scene is the affine rather that projective space. For their non-parametric nature, we call them combinatorial. The constraints are derived in a unified theoretical framework using the theory of oriented matroids. For example, we present constraints on 4 point correspondences for 2D camera resectioning, on 3 correspondences in two 1D cameras, and on 4 correspondences in two 2D cameras.","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124803512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Phenomenological eigenfunctions for image irradiance
Pub Date: 2003-10-13 | DOI: 10.1109/ICCV.2003.1238398
Peter Nillius, J. Eklundh
We present a framework for calculating low-dimensional bases to represent image irradiance from surfaces with isotropic reflectance under arbitrary illumination. By representing the illumination and the bidirectional reflectance distribution function (BRDF) in frequency space, a model for the image irradiance is derived. This model is then reduced in dimensionality by analytically constructing the principal component basis for all images, given the variations in both the illumination and the surface material. The principal component basis is constructed in such a way that all the symmetries (Helmholtz reciprocity and isotropy) of the BRDF are preserved in the basis functions. Using the framework, we calculate a basis from a database of natural illumination and the CUReT database of BRDFs of real-world surface materials.
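The paper constructs the basis analytically in frequency space; purely for intuition, the sketch below shows the brute-force numerical counterpart: stack sampled irradiance images generated under varying illumination and material, centre them, and read a principal component basis off an SVD. All data and dimensions here are placeholders.

import numpy as np

def pca_basis(samples, n_basis=9):
    """samples: (N, P) matrix, one sampled irradiance image (P pixels) per row."""
    mean = samples.mean(axis=0)
    _, s, Vt = np.linalg.svd(samples - mean, full_matrices=False)
    variance_kept = (s[:n_basis] ** 2).sum() / (s ** 2).sum()
    return mean, Vt[:n_basis], variance_kept

# usage: mean, basis, frac = pca_basis(np.random.rand(500, 32 * 32))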
{"title":"Phenomenological eigenfunctions for image irradiance","authors":"Peter Nillius, J. Eklundh","doi":"10.1109/ICCV.2003.1238398","DOIUrl":"https://doi.org/10.1109/ICCV.2003.1238398","url":null,"abstract":"We present a framework for calculating low-dimensional bases to represent image irradiance from surfaces with isotropic reflectance under arbitrary illumination. By representing the illumination and the bidirectional reflectance distribution function (BRDF) in frequency space, a model for the image irradiance is derived. This model is then reduced in dimensionality by analytically constructing the principal component basis for all images given the variations in both the illumination and the surface material. The principal component basis are constructed in such a way that all the symmetries (Helmholtz reciprocity and isotropy) of the BRDF are preserved in the basis functions. Using the framework we calculate a basis using a database of natural illumination and the CURET database containing BRDFs of real world surface materials.","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127286419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spectral partitioning for structure from motion
Pub Date: 2003-10-13 | DOI: 10.1109/ICCV.2003.1238457
Drew Steedly, Irfan Essa, F. Dellaert
We propose a spectral partitioning approach for large-scale optimization problems, specifically structure from motion. In structure from motion, partitioning methods reduce the problem into smaller and better-conditioned subproblems that can be optimized efficiently. Our partitioning method uses only the Hessian of the reprojection error and its eigenvectors. We show that partitioned systems that preserve the eigenvectors corresponding to small eigenvalues result in lower residual error when optimized. We create partitions by clustering the entries of the eigenvectors of the Hessian that correspond to small eigenvalues. This is more general than relying on domain knowledge and heuristics, as bottom-up structure-from-motion approaches do, while at the same time exploiting more information than generic matrix partitioning algorithms.
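A minimal sketch of the clustering step as described: embed each parameter by its entries in the eigenvectors of the Hessian associated with the smallest eigenvalues, then cluster that embedding so the resulting partition preserves those directions. The dense eigendecomposition and the use of k-means are assumptions made for brevity.

import numpy as np
from sklearn.cluster import KMeans

def spectral_partition(hessian, n_partitions=2, n_small=4):
    """hessian: (P, P) symmetric Hessian of the reprojection error."""
    eigvals, eigvecs = np.linalg.eigh(hessian)   # eigenvalues in ascending order
    embedding = eigvecs[:, :n_small]             # per-parameter entries, small eigenvalues
    labels = KMeans(n_clusters=n_partitions, n_init=10,
                    random_state=0).fit_predict(embedding)
    return labels                                # partition index per parameter

# usage: labels = spectral_partition(J.T @ J), using the Gauss-Newton
# approximation of the Hessian for a reprojection Jacobian J.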
{"title":"Spectral partitioning for structure from motion","authors":"Drew Steedly, Irfan Essa, F. Dellaert","doi":"10.1109/ICCV.2003.1238457","DOIUrl":"https://doi.org/10.1109/ICCV.2003.1238457","url":null,"abstract":"We propose a spectral partitioning approach for large-scale optimization problems, specifically structure from motion. In structure from motion, partitioning methods reduce the problem into smaller and better conditioned subproblems which can be efficiently optimized. Our partitioning method uses only the Hessian of the reprojection error and its eigenvector. We show that partitioned systems that preserve the eigenvectors corresponding to small eigenvalues result in lower residual error when optimized. We create partitions by clustering the entries of the eigenvectors of the Hessian corresponding to small eigenvalues. This is a more general technique than relying on domain knowledge and heuristics such as bottom-up structure from motion approaches. Simultaneously, it takes advantage of more information than generic matrix partitioning algorithms.","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126907404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the use of marginal statistics of subband images
Pub Date: 2003-10-13 | DOI: 10.1109/ICCV.2003.1238381
J. Gluckman
A commonly used representation of a visual pattern is the set of marginal probability distributions of the outputs of a bank of filters (Gaussian, Laplacian, Gabor, etc.). This representation has been used effectively for a variety of vision tasks, including texture classification, texture synthesis, object detection, and image retrieval. We examine the ability of this representation to discriminate between an arbitrary pair of visual stimuli. We derive examples of patterns that provably possess the same marginal statistical properties, yet are "visually distinct." These results suggest the need either to employ a large and diverse filter bank or to incorporate joint statistics in order to represent a large class of visual patterns.
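A small sketch of the representation under study: the collection of marginal histograms of a filter bank's responses. The particular filters, normalization, bin count, and L1 distance are assumptions; the paper's point is precisely that two visually distinct patterns can agree under such statistics when the bank is small.

import numpy as np
from scipy import ndimage

def marginal_stats(img, bins=32):
    """img: 2D float array. Returns concatenated marginal histograms of subbands."""
    responses = [
        ndimage.gaussian_filter(img, sigma=1.0),
        ndimage.gaussian_laplace(img, sigma=1.0),
        ndimage.sobel(img, axis=0),
        ndimage.sobel(img, axis=1),
    ]
    hists = []
    for r in responses:
        r = (r - r.mean()) / (r.std() + 1e-8)   # normalize so bins are comparable
        h, _ = np.histogram(r, bins=bins, range=(-4, 4), density=True)
        hists.append(h)
    return np.concatenate(hists)

def histogram_distance(img_a, img_b):
    """Mean L1 distance between the two marginal-statistics vectors."""
    return np.abs(marginal_stats(img_a) - marginal_stats(img_b)).mean()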
{"title":"On the use of marginal statistics of subband images","authors":"J. Gluckman","doi":"10.1109/ICCV.2003.1238381","DOIUrl":"https://doi.org/10.1109/ICCV.2003.1238381","url":null,"abstract":"A commonly used representation of a visual pattern is the set of marginal probability distributions of the output of a bank of filters (Gaussian, Laplacian, Gabor etc.). This representation has been used effectively for a variety of vision tasks including texture classification, texture synthesis, object detection and image retrieval. We examine the ability of this representation to discriminate between an arbitrary pair of visual stimuli. Examples of patterns are derived that provably possess the same marginal statistical properties, yet are \"visually distinct.\" These results suggest the need for either employing a large and diverse filter bank or incorporating joint statistics in order to represent a large class of visual patterns.","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126740356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video input driven animation (VIDA)
Pub Date: 2003-10-13 | DOI: 10.1109/ICCV.2003.1238319
Mengqin Sun, A. Jepson, E. Fiume
There are many challenges associated with the integration of synthetic and real imagery. One particularly difficult problem is the automatic extraction of salient parameters of natural phenomena from real video footage for subsequent application to synthetic objects. For example, we would like to ensure that the hair and clothing of a synthetic actor placed in a meadow of swaying grass move consistently with the wind that moves the grass. The video footage can be seen as a controller for the motion of synthetic features, a concept we call video input driven animation (VIDA). We propose a schema that analyzes an input video sequence, extracts parameters from the motion of objects in the video, and uses this information to drive the motion of synthetic objects. To validate the principles of VIDA, we approximate the inverse problem of harmonic oscillation, which we use to extract parameters of wind and of regular water waves. We observe the effect of wind on a tree in a video, estimate wind-speed parameters from its motion, and then use these to make synthetic objects move. We also extract water-elevation parameters from the observed motion of boats and apply the resulting water waves to synthetic boats.
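A toy sketch of the kind of measurement that could feed the inverse harmonic-oscillation step: estimating the dominant oscillation frequency and amplitude of a tracked displacement signal (say, a branch tip) from its spectrum. A uniform frame rate and a single dominant mode are assumed; mapping these quantities to wind parameters follows the paper's model and is not shown.

import numpy as np

def dominant_oscillation(displacement, fps=30.0):
    """displacement: 1D array of tracked positions over time (pixels)."""
    x = displacement - displacement.mean()
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    k = np.argmax(np.abs(spectrum[1:])) + 1       # skip the DC bin
    amplitude = 2.0 * np.abs(spectrum[k]) / len(x)
    return freqs[k], amplitude                    # Hz, pixels

# usage: f, a = dominant_oscillation(tracked_x, fps=30.0)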
{"title":"Video input driven animation (VIDA)","authors":"Mengqin Sun, A. Jepson, E. Fiume","doi":"10.1109/ICCV.2003.1238319","DOIUrl":"https://doi.org/10.1109/ICCV.2003.1238319","url":null,"abstract":"There are many challenges associated with the integration of synthetic and real imagery. One particularly difficult problem is the automatic extraction of salient parameters of natural phenomena in real video footage for subsequent application to synthetic objects. We can ensure that the hair and clothing of a synthetic actor placed in a meadow of swaying grass will move consistently with the wind that moved that grass. The video footage can be seen as a controller for the motion of synthetic features, a concept we call video input driven animation (VIDA). We propose a schema that analyzes an input video sequence, extracts parameters from the motion of objects in the video, and uses this information to drive the motion of synthetic objects. To validate the principles of VIDA, we approximate the inverse problem to harmonic oscillation, which we use to extract parameters of wind and of regular water waves. We observe the effect of wind on a tree in a video, estimate wind speed parameters from its motion, and then use this to make synthetic objects move. We also extract water elevation parameters from the observed motion of boats and apply the resulting water waves to synthetic boats.","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126255258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Catadioptric camera calibration using geometric invariants
Pub Date: 2003-10-13 | DOI: 10.1109/ICCV.2003.1238647
Xianghua Ying, Zhanyi Hu
Central catadioptric cameras are imaging devices that use mirrors to enlarge the field of view while preserving a single effective viewpoint. In this paper, we propose a novel method for calibrating central catadioptric cameras using geometric invariants. Both lines and spheres in space are projected into conics in the catadioptric image plane. We prove that the projection of a line provides three invariants, whereas the projection of a sphere provides two. From these invariants, constraint equations on the intrinsic parameters of the catadioptric camera are derived. The method therefore has two variants: the first uses the projections of lines, and the second uses the projections of spheres. In general, the projections of two lines or three spheres are sufficient to calibrate the catadioptric camera. One important observation in this paper is that the sphere-based method is more robust and more accurate than the line-based one. The performance of our method is demonstrated by results of simulations and experiments with real images.
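A hedged building block rather than the full calibration: since both lines and spheres project to conics in a central catadioptric image, the method needs conics fitted to image points. The algebraic least-squares fit below returns the symmetric 3x3 conic matrix C with x^T C x = 0; the point data are placeholders, and the computation of invariants from C is left to the paper.

import numpy as np

def fit_conic(points):
    """points: (N, 2) image points on one conic, N >= 5."""
    x, y = points[:, 0], points[:, 1]
    D = np.stack([x * x, x * y, y * y, x, y, np.ones_like(x)], axis=1)
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]            # null vector = conic coefficients
    return np.array([[a,     b / 2, d / 2],
                     [b / 2, c,     e / 2],
                     [d / 2, e / 2, f    ]])

# usage: C = fit_conic(edge_points), where edge_points are samples along the
# image of a scene line or sphere contour.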
{"title":"Catadioptric camera calibration using geometric invariants","authors":"Xianghua Ying, Zhanyi Hu","doi":"10.1109/ICCV.2003.1238647","DOIUrl":"https://doi.org/10.1109/ICCV.2003.1238647","url":null,"abstract":"Central catadioptric cameras are imaging devices that use mirrors to enhance the field of view while preserving a single effective viewpoint. In this paper, we propose a novel method for the calibration of central catadioptric cameras using geometric invariants. Lines in space are projected into conics in the catadioptric image plane as well as spheres in space. We proved that the projection of a line can provide three invariants whereas the projection of a sphere can provide two. From these invariants, constraint equations for the intrinsic parameters of catadioptric camera are derived. Therefore, there are two variants of this novel method. The first one uses the projections of lines and the second one uses the projections of spheres. In general, the projections of two lines or three spheres are sufficient to achieve the catadioptric camera calibration. One important observation in this paper is that the method based on the projections of spheres is more robust and has higher accuracy than that using the projections of lines. The performances of our method are demonstrated by the results of simulations and experiments with real images.","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"1938 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128778825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calibration of a hybrid camera network
Pub Date: 2003-10-13 | DOI: 10.1109/ICCV.2003.1238330
Xilin Chen, Jie Yang, A. Waibel
Visual surveillance using a camera network imposes new challenges on camera calibration. An essential problem is that a large number of cameras may not share a common field of view or even be well synchronized. We propose to use a hybrid camera network consisting of catadioptric and perspective cameras for a visual surveillance task. The relations between multiple views of a scene captured by different cameras can then be calibrated in the catadioptric camera's coordinate system. We address the important issue of how to calibrate the hybrid camera network, which we do in three steps. First, we calibrate the catadioptric camera using only vanishing points; to reduce computational complexity, we calibrate the camera without the mirror first and then calibrate the full catadioptric camera system. Second, we determine the 3D positions of some points using as few as two parallel spatial lines and some equidistant points. Finally, we calibrate the other perspective cameras from these known spatial points.
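A hedged helper for the first step only: vanishing points can be estimated from a family of image line segments that belong to parallel scene lines, here by taking the point that minimizes the algebraic distance to all the lines. The segment-endpoint input format is an assumption, and the returned point is assumed to have finite image coordinates.

import numpy as np

def vanishing_point(segments):
    """segments: (N, 2, 2) endpoint pairs of line segments. Returns (x, y)."""
    lines = []
    for (p, q) in segments:
        l = np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])   # homogeneous line
        lines.append(l / np.linalg.norm(l))
    L = np.array(lines)
    _, _, Vt = np.linalg.svd(L)        # v minimizing sum of (l . v)^2
    v = Vt[-1]
    return v[:2] / v[2]

# usage: vp = vanishing_point(np.array([[[0, 0], [10, 1]], [[0, 5], [10, 5.5]]]))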
{"title":"Calibration of a hybrid camera network","authors":"Xilin Chen, Jie Yang, A. Waibel","doi":"10.1109/ICCV.2003.1238330","DOIUrl":"https://doi.org/10.1109/ICCV.2003.1238330","url":null,"abstract":"Visual surveillance using a camera network has imposed new challenges to camera calibration. An essential problem is that a large number of cameras may not have a common field of view or even be synchronized well. We propose to use a hybrid camera network that consists of catadioptric and perspective cameras for a visual surveillance task. The relations between multiple views of a scene captured from different cameras can be then calibrated under the catadioptric camera's coordinate system. We address the important issue of how to calibrate the hybrid camera network. We calibrate the hybrid camera network in three steps. First, we calibrate the catadioptric camera using only the vanishing points. In order to reduce computational complexity, we calibrate the camera without the mirror first and then calibrate the catadioptric camera system. Second, we determine 3D positions of some points using as few as two spatial parallel lines and some equidistance points. Finally, we calibrate other perspective cameras based on these known spatial points.","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129036718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Markov-based failure prediction for human motion analysis
Pub Date: 2003-10-13 | DOI: 10.1109/ICCV.2003.1238638
S. Dockstader, Nikita S. Imennov, A. Tekalp
This paper presents a new method of detecting and predicting motion tracking failures with applications in human motion and gait analysis. We define a tracking failure as an event and describe its temporal characteristics using a hidden Markov model (HMM). This stochastic model is trained using previous examples of tracking failures. We derive vector observations for the HMM using the noise covariance matrices characterizing a tracked, 3D structural model of the human body. We show a causal relationship between the conditional output probability of the HMM, as transformed using a logarithmic mapping function, and impending tracking failures. Results are illustrated on several multi-view sequences of complex human motion.
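A minimal sketch of the monitoring idea, assuming discrete observation symbols and a fixed threshold: run the HMM forward pass online and watch the log of the conditional observation probability log p(o_t | o_1..t-1); a sustained drop signals an impending tracking failure. The paper uses continuous vector observations derived from noise covariances, which this toy does not model.

import numpy as np

def forward_monitor(pi, A, B, observations, log_thresh=-6.0):
    """pi: (S,) initial state probs, A: (S, S) transitions, B: (S, M) emission
    probs, observations: sequence of symbol indices. Returns log-probs and flags."""
    logs, flags = [], []
    alpha = pi * B[:, observations[0]]       # unnormalized forward variable
    prob = alpha.sum() + 1e-300              # p(o_1); epsilon avoids log(0)
    logs.append(np.log(prob)); flags.append(np.log(prob) < log_thresh)
    alpha /= prob
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
        prob = alpha.sum() + 1e-300          # p(o_t | o_1..t-1)
        logs.append(np.log(prob)); flags.append(np.log(prob) < log_thresh)
        alpha /= prob
    return np.array(logs), np.array(flags)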
{"title":"Markov-based failure prediction for human motion analysis","authors":"S. Dockstader, Nikita S. Imennov, A. Tekalp","doi":"10.1109/ICCV.2003.1238638","DOIUrl":"https://doi.org/10.1109/ICCV.2003.1238638","url":null,"abstract":"This paper presents a new method of detecting and predicting motion tracking failures with applications in human motion and gait analysis. We define a tracking failure as an event and describe its temporal characteristics using a hidden Markov model (HMM). This stochastic model is trained using previous examples of tracking failures. We derive vector observations for the HMM using the noise covariance matrices characterizing a tracked, 3D structural model of the human body. We show a causal relationship between the conditional output probability of the HMM, as transformed using a logarithmic mapping function, and impending tracking failures. Results are illustrated on several multi-view sequences of complex human motion.","PeriodicalId":131580,"journal":{"name":"Proceedings Ninth IEEE International Conference on Computer Vision","volume":"56 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116576303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}