Modeling geometric structure and illumination variation of a scene from real images
Pub Date: 1998-01-04 · DOI: 10.1109/ICCV.1998.710845
Zhengyou Zhang
We present in this paper a system which automatically builds, from real images, a scene model containing both 3D geometric information of the scene structure and its photometric information under various illumination conditions. The geometric structure is recovered from images taken from distinct viewpoints. Structure-from-motion and correlation-based stereo techniques are used to match pixels between images of different viewpoints and to reconstruct the scene in 3D space. The photometric property is extracted from images taken under different illumination conditions (orientation, position and intensity of the light sources). This is achieved by computing a low-dimensional linear space of the spatio-illumination volume, which is represented by a set of basis images. The resulting model can be used to create realistic renderings from different viewpoints and illumination conditions. Applications include object recognition, virtual reality and product advertisement.
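As a rough illustration of the basis-image idea, the sketch below stacks images of a static scene taken under different lights and extracts an orthonormal basis with an SVD. This is a generic low-rank decomposition offered under our own assumptions (the function name, array layout, and choice of k are ours), not the paper's exact procedure.

```python
import numpy as np

def basis_images(images, k=3):
    """Extract k orthonormal basis images from a stack of images of a
    static scene under varying illumination (generic SVD sketch, not
    the paper's exact procedure).

    images : array of shape (n_lights, height, width)
    """
    n, h, w = images.shape
    X = images.reshape(n, h * w).astype(np.float64)
    # Rows of Vt are orthonormal basis images; the top k span a
    # low-dimensional linear space of the illumination stack.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].reshape(k, h, w)
```

An image under a novel illumination is then approximated as a linear combination of the basis images, with coefficients fit by least squares.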
{"title":"Modeling geometric structure and illumination variation of a scene from real images","authors":"Zhengyou Zhang","doi":"10.1109/ICCV.1998.710845","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710845","url":null,"abstract":"We present in this paper a system which automatically builds, from real images, a scene model containing both 3D geometric information of the scene structure and its photometric information under various illumination conditions. The geometric structure is recovered from images taken from distinct viewpoints. Structure-from-motion and correlation-based stereo techniques are used to match pixels between images of different viewpoints and to reconstruct the scene in 3D space. The photometric property is extracted from images taken under different illumination conditions (orientation, position and intensity of the light sources). This is achieved by computing a low-dimensional linear space of the spatio-illumination volume, and is represented by a set of basis images. The model that has been built can be used to create realistic renderings from different viewpoints and illumination conditions. Applications include object recognition, virtual reality and product advertisement.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"19 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121010786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Egomotion estimation using log-polar images
Pub Date: 1998-01-04 · DOI: 10.1109/ICCV.1998.710833
C. Silva, J. Santos-Victor
We address the problem of egomotion estimation of a monocular observer moving with arbitrary translation and rotation in an unknown environment, using log-polar images. The method we propose is based solely on the spatio-temporal image derivatives, i.e., the normal flow. Thus, we avoid computing the complete optical flow field, which is ill-posed due to the aperture problem. We use a search paradigm based on geometric properties of the normal flow field, and consider a family of search subspaces to estimate the egomotion parameters. These algorithms are particularly well suited to the log-polar image geometry, as we use a selection of special normal flow vectors with a simple representation in log-polar coordinates. This approach highlights the close coupling between algorithmic aspects and sensor geometry (retina physiology) often found in nature. Finally, we present and discuss a set of experiments, for various kinds of camera motion, which show encouraging results.
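The normal flow referred to above follows directly from the brightness-constancy constraint Ix*u + Iy*v + It = 0, which pins down only the flow component along the image gradient. A minimal NumPy sketch, assuming two grey-level frames and ordinary Cartesian coordinates (the paper works in log-polar coordinates):

```python
import numpy as np

def normal_flow(frame0, frame1, eps=1e-6):
    """Normal flow from spatio-temporal derivatives (illustrative
    sketch, not the authors' log-polar implementation).

    The component of flow along the gradient is
        n = -(It / |grad I|^2) * grad I.
    """
    Iy, Ix = np.gradient(frame0.astype(np.float64))
    It = frame1.astype(np.float64) - frame0.astype(np.float64)
    g2 = Ix**2 + Iy**2 + eps          # squared gradient magnitude
    return -It * Ix / g2, -It * Iy / g2
```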
{"title":"Egomotion estimation using log-polar images","authors":"C. Silva, J. Santos-Victor","doi":"10.1109/ICCV.1998.710833","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710833","url":null,"abstract":"We address the problem of egomotion estimation of a monocular observer moving with arbitrary translation and rotation in an unknown environment, using log-polar images. The method we propose is uniquely based on the spatio-temporal image derivatives, or the normal flow. Thus, we avoid computing the complete optical flow field, which is an ill-posed problem due to the aperture problem. We use a search paradigm based on geometric properties of the normal flow field, and consider a family of search subspaces to estimate the egomotion parameters. These algorithms are particularly well-suited for the log-polar image geometry, as we use a selection of special normal flow, vectors with simple representation in log-polar coordinates. This approach highlights the close coupling between algorithmic aspects and the sensor geometry (retina physiology), often, found in nature. Finally, we present and discuss a set of experiments, for various kinds of camera motions, which show encouraging results.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125732390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast stereovision with subpixel-precision
Pub Date: 1998-01-04 · DOI: 10.1109/ICCV.1998.710842
R. Henkel
A fast stereo algorithm based on aliasing effects of simple disparity estimators within a coherence detection scheme is presented. The algorithm calculates dense disparity maps with subpixel precision by performing local spatial filter operations and simple arithmetic transformations. Performance similar to classical area-based approaches is achieved, but without the complicated hierarchical search structure typical of these approaches. The algorithm is completely parallel; the disparity values are calculated independently for each pixel. In addition, local validation counts for the disparity estimates and a fused cyclopean view of the scene are available within the proposed network structure for coherence-based stereo.
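For intuition, one "simple disparity estimator" of the sort such a coherence scheme could aggregate is the classic differential estimator below. It is a generic stand-in of our own choosing, not the paper's filter bank, and it is only valid for small shifts, which is precisely why individual estimators alias at larger disparities and need a coherence/validation stage.

```python
import numpy as np

def differential_disparity(left_row, right_row, min_grad=1e-3):
    """Per-pixel differential disparity estimate for one scanline pair.
    With R(x) = L(x - d), a first-order expansion gives
    L(x) - R(x) ~ d * L'(x), so d ~ (L - R) / L'.
    Only meaningful where the gradient is strong; NaN elsewhere.
    """
    L = left_row.astype(np.float64)
    R = right_row.astype(np.float64)
    g = np.gradient(0.5 * (L + R))    # symmetric gradient estimate
    d = np.full_like(L, np.nan)
    ok = np.abs(g) > min_grad         # reject weak-gradient pixels
    d[ok] = (L[ok] - R[ok]) / g[ok]
    return d
```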
{"title":"Fast stereovision with subpixel-precision","authors":"R. Henkel","doi":"10.1109/ICCV.1998.710842","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710842","url":null,"abstract":"A fast stereo algorithm based on aliasing effects of simple disparity estimators within a coherence detection scheme is presented. The algorithm calculates dense disparity maps with subpixel-precision by performing local spatial filter operations and simple arithmetic transformations. Performance similar to classical area-based approaches is achieved, but without the complicated hierarchical search structure typical for these approaches. The algorithm is completely parallel; the disparity valves are calculated independently for each pixel. In addition, local validation counts for the disparity estimates and a fused cyclopean view of the scene are available within the proposed network structure for coherence-based stereo.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123302457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Construction and refinement of panoramic mosaics with global and local alignment
Pub Date: 1998-01-04 · DOI: 10.1109/ICCV.1998.710831
H. Shum, R. Szeliski
This paper presents techniques for constructing full-view panoramic mosaics from sequences of images. Our representation associates a rotation matrix (and optionally a focal length) with each input image, rather than explicitly projecting all of the images onto a common surface (e.g., a cylinder). In order to reduce accumulated registration errors, we apply global alignment (block adjustment) to the whole sequence of images, which results in an optimal image mosaic (in the least-squares sense). To compensate for small amounts of motion parallax introduced by translations of the camera and other unmodeled distortions, we develop a local alignment (deghosting) technique which warps each image based on the results of pairwise local image registrations. By combining both global and local alignment, we significantly improve the quality of our image mosaics, thereby enabling the creation of full-view panoramic mosaics with hand-held cameras.
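Concretely, once each image carries a rotation R and focal length f, the warp between any two views taken from the same camera centre is a 3x3 homography. A minimal sketch, assuming square pixels and a known principal point (the simplified intrinsics are our assumption):

```python
import numpy as np

def rotation_homography(R0, R1, f0, f1, center=(0.0, 0.0)):
    """Homography mapping pixels of view 0 into view 1 when both views
    share a camera centre (pure rotation). Sketch only: square pixels
    and a known principal point are assumed.
    """
    cx, cy = center
    K0 = np.array([[f0, 0.0, cx], [0.0, f0, cy], [0.0, 0.0, 1.0]])
    K1 = np.array([[f1, 0.0, cx], [0.0, f1, cy], [0.0, 0.0, 1.0]])
    # With x ~ K R X for a viewing direction X, the inter-image warp is
    # x1 ~ K1 R1 R0^T K0^{-1} x0 (homogeneous pixel coordinates).
    return K1 @ R1 @ R0.T @ np.linalg.inv(K0)
```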
{"title":"Construction and refinement of panoramic mosaics with global and local alignment","authors":"H. Shum, R. Szeliski","doi":"10.1109/ICCV.1998.710831","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710831","url":null,"abstract":"This paper presents techniques for constructing full view panoramic mosaics form sequences of images. Our representation associates a rotation matrix (and optionally a focal length) with each input image, rather than explicitly projecting all of the images onto a common surface (e.g., a cylinder). In order to reduce accumulated registration errors we apply global alignment (block adjustment) to whole sequence of images, which results in an optimal image mosaic (in the least squares sense). To compensate for small amounts of motion parallax introduced by translations of the camera and other unmodeled distortions we develop a local alignment (deghosting) technique which warps each image based on the results of pairwise local image registrations. By combining both global and local alignment we significantly improve the quality of our image mosaics thereby enabling the creation of full view panoramic mosaics with hand-held cameras.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131459357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Segmenting cortical gray matter for functional MRI visualization
Pub Date: 1998-01-04 · DOI: 10.1109/ICCV.1998.710733
P. Teo, G. Sapiro, B. Wandell
We describe a system that is being used to segment gray matter and create connected cortical representations from MRI. The method exploits knowledge of the anatomy of the cortex and incorporates structural constraints into the segmentation. First, the white matter and CSF regions in the MR volume are segmented using some novel techniques of posterior anisotropic diffusion. Then, the user selects the cortical white matter component of interest, and its structure is verified by checking for cavities and handles. After this, a connected representation of the gray matter is created by a constrained growing-out from the white matter boundary. Because the connectivity is computed, the segmentation can be used as input to several methods of visualizing the spatial pattern of cortical activity within gray matter. In our case, the connected representation of gray matter is used to create a representation of the flattened cortex. Then, fMRI measurements are overlaid on the flattened representation, yielding a representation of the volumetric data within a single image.
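The diffusion ingredient of the pipeline can be illustrated with one explicit step of standard Perona-Malik anisotropic diffusion; note that the paper's "posterior" variant applies diffusion to class posterior probabilities rather than raw intensities, and the step size and edge-stopping function below are generic choices of ours.

```python
import numpy as np

def perona_malik_step(u, kappa=10.0, dt=0.2):
    """One explicit step of Perona-Malik anisotropic diffusion (generic
    building block; the paper's variant diffuses class posterior
    probabilities rather than raw intensities).
    """
    # Four nearest-neighbour differences (np.roll gives periodic
    # borders, which is fine for a sketch).
    dN = np.roll(u, 1, axis=0) - u
    dS = np.roll(u, -1, axis=0) - u
    dE = np.roll(u, -1, axis=1) - u
    dW = np.roll(u, 1, axis=1) - u

    def g(d):
        # Edge-stopping function: diffusion slows across strong edges.
        return np.exp(-(d / kappa) ** 2)

    return u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
```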
{"title":"Segmenting cortical gray matter for functional MRI visualization","authors":"P. Teo, G. Sapiro, B. Wandell","doi":"10.1109/ICCV.1998.710733","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710733","url":null,"abstract":"We describe a system that is being used to segment gray matter and create connected cortical representations from MRI. The method exploits knowledge of the anatomy of the cortex and incorporates structural constraints into the segmentation. First, the white matter and CSF regions in the MR volume are segmented using some novel techniques of posterior anisotropic diffusion. Then, the user selects the cortical white matter component of interest, and its structure is verified by checking for cavities and handles. After this, a connected representation of the gray matter is created by a constrained growing-out from the white matter boundary. Because the connectivity is computed, the segmentation can be used as input to several methods of visualizing the spatial pattern of cortical activity within gray matter. In our case, the connected representation of gray matter is used to create a representation of the flattened cortex. Then, fMRI measurements are overlaid on the flattened representation, yielding a representation of the volumetric data within a single image.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126288086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust multi-sensor image alignment
Pub Date: 1998-01-04 · DOI: 10.1109/ICCV.1998.710832
M. Irani, P. Anandan
This paper presents a method for the alignment of images acquired by sensors of different modalities (e.g., EO and IR). The paper has two main contributions: (i) it identifies an appropriate image representation for multi-sensor alignment, i.e., a representation which emphasizes the common information between the two multi-sensor images, suppresses the non-common information, and is adequate for coarse-to-fine processing; (ii) it presents a new alignment technique which applies global estimation to any choice of a local similarity measure. In particular, it is shown that when this registration technique is applied to the chosen image representation with a local normalized-correlation similarity measure, it provides a new multi-sensor alignment algorithm which is robust to outliers and applies to a wide variety of globally complex brightness transformations between the two images. Our proposed image representation does not rely on sparse image features (e.g., edge, contour, or point features). It is continuous and does not eliminate the detailed variations within local image regions. Our method naturally extends to coarse-to-fine processing, and applies even in situations where the multi-sensor signals are globally characterized by low statistical correlation.
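The local similarity measure named above is plain normalized correlation between corresponding patches. A minimal sketch (patch extraction, the chosen image representation, and the global estimation loop are all omitted):

```python
import numpy as np

def normalized_correlation(a, b, eps=1e-8):
    """Normalized correlation between two same-size image patches:
    mean-subtracted inner product divided by the patch norms, so the
    score is invariant to local gain and offset changes.
    """
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return (a @ b) / (np.sqrt((a @ a) * (b @ b)) + eps)
```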
{"title":"Robust multi-sensor image alignment","authors":"M. Irani, P. Anandan","doi":"10.1109/ICCV.1998.710832","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710832","url":null,"abstract":"This paper presents a method for alignment of images acquired by sensors of different modalities (e.g., EO and IR). The paper has two main contributions: (i) It identifies an appropriate image representation, for multi-sensor alignment, i.e., a representation which emphasizes the common information between the two multi-sensor images, suppresses the non-common information, and is adequate for coarse-to-fine processing. (ii) It presents a new alignment technique which applies global estimation to any choice of a local similarity measure. In particular, it is shown that when this registration technique is applied to the chosen image representation with a local normalized-correlation similarity measure, it provides a new multi-sensor alignment algorithm which is robust to outliers, and applies to a wide variety of globally complex brightness transformations between the two images. Our proposed image representation does not rely on sparse image features (e.g., edge, contour, or point features). It is continuous and does not eliminate the detailed variations within local image regions. Our method naturally extends to coarse-to-fine processing, and applies even in situations when the multi-sensor signals are globally characterized by low statistical correlation.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131945188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixtures of eigenfeatures for real-time structure from texture
Pub Date: 1998-01-04 · DOI: 10.1109/ICCV.1998.710710
T. Jebara, Kenneth B. Russell, A. Pentland
We describe a face modeling system which estimates complete facial structure and texture from a real-time video stream. The system begins with a face tracking algorithm which detects and stabilizes live facial images into a canonical 3D pose. The resulting canonical texture is then processed by a statistical model to filter imperfections and estimate unknown components such as missing pixels and underlying 3D structure. This statistical model is a soft mixture of eigenfeature selectors which span the 3D deformations and texture changes across a training set of laser-scanned faces. An iterative algorithm is introduced for determining the dimensional partitioning of the eigenfeatures to maximize their generalization capability over a cross-validation set of data. The model's ability to filter and estimate absent facial components is then demonstrated on incomplete 3D data. This ultimately allows the model to span known and regress unknown facial information from stabilized natural video sequences generated by a face tracking algorithm. The resulting continuous and dynamic estimation of the model's parameters over a video sequence generates a compact temporal description of the 3D deformations and texture changes of the face.
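The "estimate missing components from an eigenbasis" step can be illustrated with standard PCA completion: fit the basis coefficients by least squares over the observed entries, then reconstruct the rest. The sketch below is that generic procedure, not the paper's mixture of selectors or its 3D coupling; all names are ours.

```python
import numpy as np

def fill_missing(x, mask, mean, basis):
    """Estimate missing entries of a data vector from an eigenbasis
    (generic PCA completion, in the spirit of eigenfeature estimation).

    x     : observation vector; values where mask is False are ignored
    mask  : boolean array, True where x was observed
    mean  : mean vector of the training set
    basis : (k, d) matrix of eigenvectors (rows)
    """
    A = basis[:, mask].T                        # observed rows of basis
    c, *_ = np.linalg.lstsq(A, x[mask] - mean[mask], rcond=None)
    x_hat = mean + basis.T @ c                  # full reconstruction
    out = x.astype(np.float64).copy()
    out[~mask] = x_hat[~mask]                   # keep observed pixels
    return out
```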
{"title":"Mixtures of eigenfeatures for real-time structure from texture","authors":"T. Jebara, Kenneth B. Russell, A. Pentland","doi":"10.1109/ICCV.1998.710710","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710710","url":null,"abstract":"We describe a face modeling system which estimates complete facial structure and texture from a real-time video stream. The system begins with a face trading algorithm which detects and stabilizes live facial images into a canonical 3D pose. The resulting canonical texture is then processed by a statistical model to filter imperfections and estimate unknown components such as missing pixels and underlying 3D structure. This statistical model is a soft mixture of eigenfeature selectors which span the 3D deformations and texture changes across a training set of laser scanned faces. An iterative algorithm is introduced for determining the dimensional partitioning of the eigenfeatures to maximize their generalization capability over a cross-validation set of data. The model's abilities to filter and estimate absent facial components are then demonstrated over incomplete 3D data. This ultimately allows the model to span known and regress unknown facial information front stabilized natural video sequences generated by a face tracking algorithm. The resulting continuous and dynamic estimation of the model's parameters over a video sequence generates a compact temporal description of the 3D deformations and texture changes of the face.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131848398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A real-time algorithm for medical shape recovery
Pub Date: 1998-01-04 · DOI: 10.1109/ICCV.1998.710735
R. Malladi, J. Sethian
In this paper, we present a shape recovery technique in 2D and 3D with specific applications in visualizing and measuring anatomical shapes from medical images. This algorithm models extremely corrugated structures like the brain, is topologically adaptable, is robust, and runs in O(N log N) time where N is the total number of points in the domain. Our two-stage technique is based on the level set shape recovery scheme and the fast marching method for computing solutions to static Hamilton-Jacobi equations.
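The fast marching method referenced above solves the eikonal equation |grad T| * F = 1 by propagating arrival times outward from seed points with a heap, which is where the O(N log N) bound comes from. A minimal grid implementation, assuming unit spacing and first-order upwind updates:

```python
import heapq
import numpy as np

def fast_marching(speed, seeds):
    """First arrival times T for fronts moving with positive speed F
    from the given seed points (minimal fast-marching sketch on a
    unit-spaced 2D grid).
    """
    T = np.full(speed.shape, np.inf)
    frozen = np.zeros(speed.shape, dtype=bool)
    heap = []
    for s in seeds:
        T[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue                      # stale heap entry
        frozen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < T.shape[0] and 0 <= nj < T.shape[1]):
                continue
            if frozen[ni, nj]:
                continue
            # Smallest neighbour in each axis (upwind values).
            a = min(T[ni - 1, nj] if ni > 0 else np.inf,
                    T[ni + 1, nj] if ni < T.shape[0] - 1 else np.inf)
            b = min(T[ni, nj - 1] if nj > 0 else np.inf,
                    T[ni, nj + 1] if nj < T.shape[1] - 1 else np.inf)
            h = 1.0 / speed[ni, nj]
            if abs(a - b) < h:            # two-sided quadratic update
                t_new = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
            else:                         # one-sided update
                t_new = min(a, b) + h
            if t_new < T[ni, nj]:
                T[ni, nj] = t_new
                heapq.heappush(heap, (t_new, (ni, nj)))
    return T
```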
{"title":"A real-time algorithm for medical shape recovery","authors":"R. Malladi, J. Sethian","doi":"10.1109/ICCV.1998.710735","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710735","url":null,"abstract":"In this paper, we present a shape recovery technique in 2D and 3D with specific applications in visualizing and measuring anatomical shapes from medical images. This algorithm models extremely corrugated structures like the brain, is topologically adaptable, is robust, and runs in O(N log N) time where N is the total number of points in the domain. Our two-stage technique is based on the level set shape recovery scheme and the fast marching method for computing solutions to static Hamilton-Jacobi equations.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129657144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PIMs and invariant parts for shape recognition
Pub Date: 1998-01-04 · DOI: 10.1109/ICCV.1998.710813
Zhibin Lei, T. Tasdizen, D. Cooper
We present completely new, very powerful solutions to two fundamental problems central to computer vision. 1. Given data sets representing C objects to be stored in a database, and given a new data set for an object, determine the object in the database that is most like the object measured. We solve this problem through the use of PIMs ("Polynomial Interpolated Measures"), a new representation integrating implicit polynomial curves and surfaces, explicit polynomials, and discrete data sets which may be sparse. The method provides high accuracy at low computational cost. 2. Given noisy 2D data along a curve (or 3D data along a surface), decompose the data into patches such that new data taken along affine or Euclidean transformations of the curve (or surface) can be decomposed into corresponding patches. Recognition of complex or partially occluded objects can then be done in terms of invariantly determined patches. We briefly outline a low-computational-cost image-database indexing system based on this representation for objects having complex shape-geometry.
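As a minimal stand-in for the implicit-polynomial ingredient of PIMs, the sketch below fits an implicit second-degree polynomial f(x, y) = 0 to 2D points by least squares; the full PIM construction (interpolated measures, higher degrees, surfaces) is beyond this illustration.

```python
import numpy as np

def fit_implicit_conic(pts):
    """Least-squares fit of an implicit conic
    f(x, y) = c . [x^2, xy, y^2, x, y, 1] = 0 to 2D points.

    pts : (n, 2) array of curve samples
    """
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The coefficient vector is the right singular vector with the
    # smallest singular value: the unit c minimizing ||M c||.
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]
```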
{"title":"PIMs and invariant parts for shape recognition","authors":"Zhibin Lei, T. Tasdizen, D. Cooper","doi":"10.1109/ICCV.1998.710813","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710813","url":null,"abstract":"We present completely new very powerful solutions to two fundamental problems central to computer vision. Given data sets representing C objects to be stored in a database, and given a new data set for an object, determine the object in the database that is most like the object measured. We solve this problem through use of PIMs (\"Polynomial Interpolated Measures\"), which is a new representation integrating implicit polynomial curves and surfaces, explicit polynomials, and discrete data sets which may be sparse. The method provides high accuracy at low computational cost. 2. Given noisy 2D data along a curve (or 3D data along a surface), decompose the data into patches such that new data taken along affine transformations or Euclidean transformations of the curve (or surface) can be decomposed into corresponding patches. Then recognition of complex or partially occluded objects can be done in terms of invariantly determined patches. We briefly outline a low computational cost image-database indexing-system based on this representation for objects having complex shape-geometry.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131244309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Affine reconstruction of curved surfaces from uncalibrated views of apparent contours
Pub Date: 1998-01-04 · DOI: 10.1109/ICCV.1998.710796
J. Sato, R. Cipolla
In this paper, we show that even if the camera is uncalibrated, and its translational motion is unknown, curved surfaces can be reconstructed from their apparent contours up to a 3D affine ambiguity. Furthermore, we show that even if the reconstruction is nonmetric (non-Euclidean), we can still extract useful information for many computer vision applications just from the apparent contours. We first show that if the camera undergoes pure translation (unknown direction and magnitude), the epipolar geometry can be recovered from the apparent contours without using any search or optimisation process. The extracted epipolar geometry is next used for reconstructing curved surfaces from the deformations of the apparent contours viewed from uncalibrated cameras. The result is applied to distinguishing curved surfaces from fixed features in images. It is also shown that the time-to-contact to the curved surfaces can be computed from simple measurements of the apparent contours. The proposed method is implemented and tested on real images of curved surfaces.
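The pure-translation observation can be illustrated with points rather than contours: under pure translation, each pair of corresponding image points is collinear with the epipole, so the epipole is recoverable as a least-squares intersection of the joining lines. The sketch below uses point correspondences as a stand-in (our simplification; the paper's contour-based recovery is more involved).

```python
import numpy as np

def epipole_from_translation(p0, p1):
    """Least-squares epipole for a purely translating camera: the
    epipolar lines join corresponding points, and all pass through
    the epipole.

    p0, p1 : (n, 2) arrays of corresponding image points
    """
    p0h = np.hstack([p0, np.ones((len(p0), 1))])
    p1h = np.hstack([p1, np.ones((len(p1), 1))])
    lines = np.cross(p0h, p1h)            # line through each point pair
    # Epipole e minimises sum (l_i . e)^2 subject to |e| = 1.
    _, _, Vt = np.linalg.svd(lines)
    e = Vt[-1]
    # Caveat: if e[2] is near zero the epipole is at infinity
    # (translation parallel to the image plane).
    return e[:2] / e[2]
```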
{"title":"Affine reconstruction of curved surfaces from uncalibrated views of apparent contours","authors":"J. Sato, R. Cipolla","doi":"10.1109/ICCV.1998.710796","DOIUrl":"https://doi.org/10.1109/ICCV.1998.710796","url":null,"abstract":"In this paper, we show that even if the camera is uncalibrated, and its translational motion is unknown, curved surfaces can be reconstructed from their apparent contours up to a 3D affine ambiguity. Furthermore, we show that even if the reconstruction is nonmetric (non-Euclidean), we can still extract useful information for many computer vision applications just from the apparent contours. We first show that if the camera undergoes pure translation (unknown direction and magnitude), the epipolar geometry can be recovered from the apparent contours without using any search or optimisation process. The extracted epipolar geometry is next used for reconstructing curved surfaces from the deformations of the apparent contours viewed from uncalibrated cameras. The result is applied to distinguishing curved surfaces from fixed features in images. It is also shown that the time-to-contact to the curved surfaces can be computed from simple measurements of the apparent contours. The proposed method is implemented and tested on real images of curved surfaces.","PeriodicalId":270671,"journal":{"name":"Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1998-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133893662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}