Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10022
Salient Features and Hypothesis Testing: evaluating a novel approach for segmentation and address block location
D. Menotti, D. Borges, A. Britto
This paper presents a modification of, and further experiments with, our segmentation algorithm based on feature selection in wavelet space [9]. The aim is to automatically separate postal envelope images into the regions corresponding to background, stamps, rubber stamps, and address blocks. First, a typical image of a postal envelope is decomposed using the Mallat algorithm with the Haar basis. The high-frequency channel outputs are analyzed to locate salient points that separate out the background. A statistical hypothesis test then selects the more consistent regions, cleaning out the remaining noise. The selected points are projected back onto the original gray-level image, where the evidence from wavelet space seeds a growing process that includes the pixels most likely to belong to the regions of stamps, rubber stamps, and the written area. We have modified the growing process controlled by the salient points, and the results improved greatly, reaching a success rate of over 97%. Experiments are run on original postal envelopes from the Brazilian Post Office Agency, and here we report results on 440 images with many different layouts and backgrounds.
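As a rough sketch of the first stage only (the function name, the fixed threshold k, and the 2x2 upsampling are our own simplifications; the paper applies a statistical hypothesis test rather than a fixed threshold), the Haar decomposition and salient-point selection might look like this with PyWavelets:

```python
import numpy as np
import pywt

def salient_points(gray, k=2.5):
    """Mark salient points from the high-frequency Haar channels.

    gray: 2-D float array (gray-level envelope image).
    k: hypothetical threshold in units of the detail std dev; the paper
       instead uses a hypothesis test to keep consistent regions.
    """
    # One level of the Mallat algorithm with the Haar basis.
    cA, (cH, cV, cD) = pywt.dwt2(gray, 'haar')
    # Pool the three high-frequency channel outputs.
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)
    # Coefficients that stand out against the (mostly smooth) background.
    mask = detail > k * detail.std()
    # Project back to the original grid: each coefficient covers 2x2 pixels.
    up = np.kron(mask.astype(np.uint8), np.ones((2, 2), dtype=np.uint8))
    return up[:gray.shape[0], :gray.shape[1]].astype(bool)
```

The returned mask would then seed the region-growing pass over the gray-level image.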
{"title":"Salient Features and Hypothesis Testing: evaluating a novel approach for segmentation and address block location","authors":"D. Menotti, D. Borges, A. Britto","doi":"10.1109/CVPRW.2003.10022","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10022","url":null,"abstract":"This paper presents a modification with further experiments of a segmentation algorithm based on feature selection in wavelet space of ours [9]. The aim is to automatically separate in postal envelopes the regions related to background, stamps, rubber stamps, and the address blocks. First, a typical image of a postal envelope is decomposed using Mallat algorithm and Haar basis. High frequency channel outputs are analyzed to locate salient points in order to separate the background. A statistical hypothesis test is taken to decide upon more consistent regions in order to clean out some noise left. The selected points are projected back to the original gray level image, where the evidence from the wavelet space is used to start a growing process to include the pixels more likely to belong to the regions of stamps, rubber stamps, and written area. We have modified the growing process controlled by the salient points and the results were greatly improved reaching success rate of over 97%. Experiments are run using original postal envelopes from the Brazilian Post Office Agency, and here we report results on 440 images with many different layouts and backgrounds.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128173316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10079
Rendering novel views from a set of omnidirectional mosaic images
H. Bakstein, T. Pajdla
We present an approach to rendering stereo pairs of views from a set of omnidirectional mosaic images that allows an arbitrary viewing direction and vergence angle for the viewer's two eyes. Moreover, we allow the viewer to move their head aside to see behind occluding objects. We propose representing the scene as a set of omnidirectional mosaic images composed from sequences acquired by an omnidirectional camera equipped with a lens with a 183° field of view. The proposed representation allows fast access to high-resolution mosaic images and an efficient in-memory representation. The method is suited to representing a real scene in which the viewer stands at one spot and looks around.
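A minimal sketch of the kind of lookup such rendering rests on (a toy cylindrical indexing of our own; the paper's mosaics and stereo geometry are more elaborate):

```python
import numpy as np

def render_view(mosaic, pan, fov, out_w, out_h):
    """Render a perspective-like view from a cylindrical mosaic.

    mosaic: H x W (x3) image whose columns sample 360 degrees of pan.
    pan: viewing direction in radians; fov: horizontal field of view.
    A toy lookup, not the paper's representation.
    """
    H, W = mosaic.shape[:2]
    # Horizontal viewing angle of every output column.
    ang = pan + (np.arange(out_w) / out_w - 0.5) * fov
    cols = np.round((ang % (2 * np.pi)) / (2 * np.pi) * (W - 1)).astype(int)
    # Vertical: resample the mosaic rows to the output height (toy choice).
    rows = np.linspace(0, H - 1, out_h).astype(int)
    return mosaic[np.ix_(rows, cols)]
```

A stereo pair would call this twice, once on the mosaic associated with each eye's offset, with the vergence angle shifting each eye's pan.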
{"title":"Rendering novel views from a set of omnidirectional mosaic images","authors":"H. Bakstein, T. Pajdla","doi":"10.1109/CVPRW.2003.10079","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10079","url":null,"abstract":"We present an approach to rendering stereo pairs of views from a set of omnidirectional mosaic images allowing arbitrary viewing direction and vergence angle of two eyes of a viewer. Moreover, we allow the viewer to move his head aside to see behind occluding objects. We propose a representation of the scene in a set of omnidirectional mosaic images composed from a sequence of images acquired by an omnidirectional camera equipped with a lens with a field of view of 183°. The proposed representation allows fast access to high resolution mosaic images and efficient representation in the memory. The proposed method can be applied in a representation of a real scene, where the viewer is supposed to stand at one spot and look around.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130151825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10074
Omnidirectional Egomotion Estimation From Back-projection Flow
O. Shakernia, R. Vidal, S. Sastry
The current state-of-the-art for egomotion estimation with omnidirectional cameras is to map the optical flow to the sphere and then apply egomotion algorithms for spherical projection. In this paper, we propose to back-project image points to a virtual curved retina that is intrinsic to the geometry of the central panoramic camera, and compute the optical flow on this retina: the so-called back-projection flow. We show that well-known egomotion algorithms can be easily adapted to work with the back-projection flow. We present extensive simulation results showing that in the presence of noise, egomotion algorithms perform better by using back-projection flow when the camera translation is in the X-Y plane. Thus, the proposed method is preferable in applications where there is no Z-axis translation, such as ground robot navigation.
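The paper's retina is specific to central panoramic geometry; as a related, standard operation, here is the unified-model lifting of normalized image points to the viewing sphere (Geyer-Daniilidis style, with mirror parameter xi), shown only as a stand-in for the back-projection step:

```python
import numpy as np

def lift_to_sphere(x, y, xi):
    """Back-project normalized image points to the unit viewing sphere
    under the unified central catadioptric model (mirror parameter xi;
    xi = 0 is a pinhole, xi = 1 a parabolic catadioptric camera).
    A stand-in sketch for the paper's curved-retina back-projection.
    """
    r2 = x**2 + y**2
    lam = (xi + np.sqrt(1 + (1 - xi**2) * r2)) / (r2 + 1)
    return np.stack([lam * x, lam * y, lam - xi], axis=-1)
```

A back-projection-style flow can then be approximated by lifting tracked points at times t and t + dt and differencing the lifted coordinates.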
{"title":"Omnidirectional Egomotion Estimation From Back-projection Flow","authors":"O. Shakernia, R. Vidal, S. Sastry","doi":"10.1109/CVPRW.2003.10074","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10074","url":null,"abstract":"The current state-of-the-art for egomotion estimation with omnidirectional cameras is to map the optical flow to the sphere and then apply egomotion algorithms for spherical projection. In this paper, we propose to back-project image points to a virtual curved retina that is intrinsic to the geometry of the central panoramic camera, and compute the optical flow on this retina: the so-called back-projection flow. We show that well-known egomotion algorithms can be easily adapted to work with the back-projection flow. We present extensive simulation results showing that in the presence of noise, egomotion algorithms perform better by using back-projection flow when the camera translation is in the X-Y plane. Thus, the proposed method is preferable in applications where there is no Z-axis translation, such as ground robot navigation.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132438175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10029
Background Line Detection with A Stochastic Model
Yefeng Zheng, Huiping Li, D. Doermann
Background lines often appear in textual documents, and it is important to detect and remove them so that the text can be easily segmented and recognized. This paper proposes a stochastic model that incorporates high-level contextual information to detect severely broken lines. We observed that 1) background lines are parallel, and 2) the vertical gaps between any two neighboring lines are roughly equal, with small variance. The novelty of our algorithm is that we use a hidden Markov model (HMM) to model the projection profile along the estimated skew angle and estimate the optimal positions of all background lines simultaneously with the Viterbi algorithm. Compared with our previous deterministic-model-based approach [15], the new method is much more robust, correctly detecting about 96.8% of the background lines in our Arabic document database.
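A compact sketch of the decoding step (the Gaussian gap penalty and all parameter names are our reading of observations 1) and 2), not the paper's exact model):

```python
import numpy as np

def detect_lines(profile, gap, sigma, n_lines):
    """Viterbi decoding of background-line positions.

    profile: 1-D projection profile along the estimated skew angle
             (peaks where lines lie).
    gap, sigma: expected line spacing and its spread, encoding the
             roughly-equal-gap observation.
    """
    P = len(profile)
    emit = np.log(profile + 1e-9)          # emission: ink density
    dp = np.full((n_lines, P), -np.inf)    # best score per (line, position)
    back = np.zeros((n_lines, P), dtype=int)
    dp[0] = emit
    for i in range(1, n_lines):
        for p in range(P):
            # Transition: penalize deviation from the expected gap.
            trans = -((p - np.arange(P) - gap) ** 2) / (2 * sigma**2)
            scores = dp[i - 1] + trans
            back[i, p] = int(np.argmax(scores))
            dp[i, p] = scores[back[i, p]] + emit[p]
    # Backtrack the jointly optimal positions of all lines.
    pos = [int(np.argmax(dp[-1]))]
    for i in range(n_lines - 1, 0, -1):
        pos.append(int(back[i, pos[-1]]))
    return pos[::-1]
```

Because all line positions are decoded jointly, a severely broken line can still be pinned down by its neighbors' spacing.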
{"title":"Background Line Detection with A Stochastic Model","authors":"Yefeng Zheng, Huiping Li, D. Doermann","doi":"10.1109/CVPRW.2003.10029","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10029","url":null,"abstract":"Background lines often exist in textual documents. It is important to detect and remove those lines so text can be easily segmented and recognized. A stochastic model is proposed in this paper which incorporates the high level contextual information to detect severely broken lines. We observed that 1) background lines are parallel, and 2) the vertical gaps between any two neighboring lines are roughly equal with small variance. The novelty of our algorithm is we use a HMM model to model the projection profile along the estimated skew angle, and estimate the optimal positions of all background lines simultaneously based on the Viterbi algorithm. Compared with our previous deterministic model based approach [15], the new method is much more robust and detects about 96.8% background lines correctly in our Arabic document database.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132954364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10080
Optical flow estimation in omnidirectional images using wavelet approach
C. Demonceaux, D. Kachi-Akkouche
Motion estimation in image sequences is a significant problem in image processing, and much research has addressed it for sequences acquired with a traditional camera. These techniques have been applied to omnidirectional image sequences, but most of them are poorly suited to such sequences: they assume the flow is locally constant, whereas the omnidirectional sensor introduces distortions that contradict this assumption. In this paper, we propose a fast method to compute the optical flow in omnidirectional image sequences, based on decomposing the Brightness Change Constraint Equation (BCCE) on a wavelet basis. To account for the distortions created by the sensor, we replace the locally constant flow assumption used for traditional images with a more appropriate hypothesis.
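As a baseline to fix notation (this sketch solves the BCCE with exactly the locally constant flow assumption the paper replaces, via plain windowed least squares rather than a wavelet projection):

```python
import numpy as np

def bcce_flow(I1, I2, win=8):
    """Windowed least-squares solve of the BCCE Ix*u + Iy*v + It = 0.

    A conventional baseline: the paper instead projects the BCCE onto a
    wavelet basis and uses a flow model adapted to omnidirectional
    distortions.
    """
    Iy, Ix = np.gradient(I1.astype(float))
    It = I2.astype(float) - I1.astype(float)
    H, W = I1.shape
    flow = np.zeros((H // win, W // win, 2))
    for i in range(0, H - win + 1, win):
        for j in range(0, W - win + 1, win):
            A = np.stack([Ix[i:i+win, j:j+win].ravel(),
                          Iy[i:i+win, j:j+win].ravel()], axis=1)
            b = -It[i:i+win, j:j+win].ravel()
            uv, *_ = np.linalg.lstsq(A, b, rcond=None)
            flow[i // win, j // win] = uv
    return flow
```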
{"title":"Optical flow estimation in omnidirectional images using wavelet approach","authors":"C. Demonceaux, D. Kachi-Akkouche","doi":"10.1109/CVPRW.2003.10080","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10080","url":null,"abstract":"The motion estimation computation in the image sequences is a significant problem in image processing. Many researches were carried out on this subject in the image sequences with a traditional camera. These techniques were applied in omnidirectional image sequences. But the majority of these methods are not adapted to this kind of sequences. Indeed they suppose the flow is locally constant but the omnidirectional sensor generates distortions which contradict this assumption. In this paper, we propose a fast method to compute the optical flow in omnidirectional image sequences. This method is based on a Brightness Change Constraint Equation decomposition on a wavelet basis. To take account of the distortions created by the sensor, we replace the assumption of flow locally constant used in traditional images by a hypothesis more appropriate.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"37 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114037534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10018
Noise Adaptive Channel Smoothing of Low-Dose Images
H. Scharr, M. Felsberg, Per-Erik Forssén
Many nano-scale sensing techniques and image processing applications are characterized by noisy, or corrupted, image data. Unlike typical camera-based computer vision imagery, where noise can be modeled quite well as additive, zero-mean white or Gaussian noise, nano-scale images suffer from low intensities and thus mainly from Poisson-like noise. In addition, the noise distributions cannot be considered symmetric, due to the limited gray-value range of the sensors and the resulting truncation of over- and underflows. In this paper we adapt B-spline channel smoothing to meet the requirements imposed by these noise characteristics. Like PDE-based diffusion schemes, it has a close connection to robust statistics, but, unlike diffusion schemes, it can handle non-zero-mean noise. To account for the multiplicative nature of Poisson noise, the variance of the smoothing kernels applied to each channel is adapted accordingly. We demonstrate the properties of this technique on noisy nano-scale images of silicon structures and compare it to anisotropic diffusion schemes specially adapted to this data.
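To fix ideas, a crude channel-smoothing sketch (the bump-shaped channel basis, the per-channel kernel widths, and the weighted-mean decoding are simplifications of ours; the paper's quadratic B-spline channels, noise-adapted variances, and robust decoding are more principled):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def channel_smooth(img, n_channels=8, base_sigma=2.0):
    """Encode gray values into overlapping channels, smooth each channel
    spatially, then decode an estimate. Wider kernels on brighter
    channels mimic Poisson noise, whose variance grows with the mean.
    """
    lo, hi = float(img.min()), float(img.max())
    t = (img - lo) / (hi - lo + 1e-9) * (n_channels - 1)  # channel coordinate
    centers = np.arange(n_channels)
    # Bump-shaped channel encoding (a stand-in for B-spline channels).
    d = np.abs(t[None] - centers[:, None, None])
    w = np.clip(1.5 - d, 0, None) ** 2
    for k in range(n_channels):
        sigma_k = base_sigma * np.sqrt(1 + k)  # crude intensity adaptation
        w[k] = gaussian_filter(w[k], sigma_k)
    # Weighted-mean decoding (the paper decodes robustly instead).
    est = (w * centers[:, None, None]).sum(0) / (w.sum(0) + 1e-9)
    return est / (n_channels - 1) * (hi - lo) + lo
```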
{"title":"Noise Adaptive Channel Smoothing of Low-Dose Images","authors":"H. Scharr, M. Felsberg, Per-Erik Forssén","doi":"10.1109/CVPRW.2003.10018","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10018","url":null,"abstract":"Many nano-scale sensing techniques and image processing applications are characterized by noisy, or corrupted, image data. Unlike typical camera-based computer vision imagery where noise can be modeled quite well as additive, zero-mean white or Gaussian noise, nano-scale images suffer from low intensities and thus mainly from Poisson-like noise. In addition, noise distributions can not be considered symmetric due to the limited gray value range of sensors and resulting truncation of over- and underflows. In this paper we adapt B-spline channel smoothing to meet the requirements imposed by this noise characteristics. Like PDE-based diffusion schemes it has a close connection to robust statistics but, unlike diffusion schemes, it can handle non-zero-mean noises. In order to account for the multiplicative nature of Poisson noise the variance of the smoothing kernels applied to each channel is properly adapted. We demonstrate the properties of this technique on noisy nano-scale images of silicon structures and compare to anisotropic diffusion schemes that were specially adapted to this data.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123336163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10031
A visual and interactive tool for optimizing lexical postcorrection of OCR results
Christian M. Strohmaier, Christoph Ringlstetter, K. Schulz, S. Mihov
Systems for postcorrection of OCR results can be fine-tuned and adapted to new recognition tasks in many respects. One issue is the selection and adaptation of a suitable background dictionary. Another is the choice of a correction model, which includes, among other decisions, the selection of an appropriate distance measure for strings and the choice of a scoring function for ranking distinct correction alternatives. When combining the results obtained from distinct OCR engines, further parameters have to be fixed. Given all these degrees of freedom, adapting and fine-tuning a system for lexical postcorrection is a difficult process. Here we describe a visual and interactive tool that semi-automates the generation of ground-truth data, partially automates the adjustment of parameters, actively supports error analysis, and thus helps to find correction strategies that reach high accuracy with realistic effort.
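To illustrate the kind of knobs involved (a string distance measure plus a scoring function over correction alternatives), a minimal ranking sketch; the weights alpha and beta are hypothetical parameters of the sort such a tool helps tune:

```python
import math

def levenshtein(a, b):
    """Plain edit distance; real postcorrection uses tuned OCR-error costs."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def rank_corrections(token, dictionary, freq, alpha=1.0, beta=0.5, k=3):
    """Rank dictionary words for an OCR token: small distance and high
    corpus frequency score well. alpha, beta, and k are hypothetical knobs."""
    scored = [(alpha * levenshtein(token, w) - beta * math.log(freq.get(w, 1)), w)
              for w in dictionary]
    return [w for _, w in sorted(scored)[:k]]
```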
{"title":"A visual and interactive tool for optimizing lexical postcorrection of OCR results","authors":"Christian M. Strohmaier, Christoph Ringlstetter, K. Schulz, S. Mihov","doi":"10.1109/CVPRW.2003.10031","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10031","url":null,"abstract":"Systems for postcorrection of OCR-results can be fine tuned and adapted to new recognition tasks in many respects. One issue is the selection and adaption of a suitable background dictionary. Another issue is the choice of a correction model, which includes, among other decisions, the selection of an appropriate distance measure for strings and the choice of a scoring function for ranking distinct correction alternatives. When combining the results obtained from distinct OCR engines, further parameters have to be fixed. Due to all these degrees of freedom, adaption and fine tuning of systems for lexical postcorrection is a difficult process. Here we describe a visual and interactive tool that semi-automates the generation of ground truth data, partially automates adjustment of parameters, yields active support for error analysis and thus helps to find correction strategies that lead to high accuracy with realistic effort.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126202276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10013
Realistic Textures for Virtual Anastylosis
A. Zalesny, D. D. Maur, R. Paget, M. Vergauwen, L. Gool
In the construction of 3D models of archaeological sites, especially during anastylosis (piecing together the dismembered remains of buildings), much more emphasis has been placed on the creation of the 3D shapes than on their textures. Nevertheless, the overall visual impression often depends more on these textures than on the precision of the underlying geometry. This paper proposes a hierarchical texture modeling and synthesis technique to simulate the intricate appearance of building materials and landscapes. A macrotexture, or "label map", prescribes the layout of microtextures, or "subtextures". The system takes example images, e.g. of a certain vegetation landscape, as input and generates the corresponding composite texture models. From such models, arbitrary amounts of similar, non-repetitive texture can be generated (i.e., without verbatim copying). The creation of the composite texture models follows a bootstrap procedure in which simple texture features help to generate the label map, and more complicated texture descriptions are then called on for the subtextures.
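A skeleton of the two-stage idea (the patch-resampling stand-in below is ours and merely marks where the paper's learned texture models for the label map and the subtextures would go):

```python
import numpy as np

def synthesize_composite(label_example, subtexture_examples, out_shape, rng=None):
    """Two-stage composite texture synthesis (skeleton).

    label_example: integer map giving an example layout of subtextures.
    subtexture_examples: dict mapping label -> example microtexture image.
    Assumes all example images are at least 16 x 16 pixels.
    """
    rng = np.random.default_rng() if rng is None else rng

    def naive_synth(example, shape):
        # Placeholder model: tile randomly drawn patches of the example.
        ph, pw = 16, 16
        out = np.zeros(shape, dtype=example.dtype)
        for i in range(0, shape[0], ph):
            for j in range(0, shape[1], pw):
                y = rng.integers(0, example.shape[0] - ph + 1)
                x = rng.integers(0, example.shape[1] - pw + 1)
                h, w = min(ph, shape[0] - i), min(pw, shape[1] - j)
                out[i:i+h, j:j+w] = example[y:y+h, x:x+w]
        return out

    # Stage 1: synthesize the macrotexture (label map).
    labels = naive_synth(label_example, out_shape)
    # Stage 2: fill each region with its synthesized subtexture.
    out = np.zeros(out_shape, dtype=float)
    for lab, ex in subtexture_examples.items():
        sub = naive_synth(ex, out_shape)
        out[labels == lab] = sub[labels == lab]
    return out
```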
{"title":"Realistic Textures for Virtual Anastylosis","authors":"A. Zalesny, D. D. Maur, R. Paget, M. Vergauwen, L. Gool","doi":"10.1109/CVPRW.2003.10013","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10013","url":null,"abstract":"In the construction of 3D models of archaeological sites, especially during the anastylosis (piecing together dismembered remains of buildings), much more emphasis has been placed on the creation of the 3D shapes rather than on their textures. Nevertheless, the overall visual impression will often depend more on these textures than on the precision of the underlying geometry. This paper proposes a hierarchical texture modeling and synthesis technique to simulate the intricate appearances of building materials and landscapes. A macrotexture or \"label map\" prescribes the layout of microtextures or \"subtextures\". The system takes example images, e.g. of a certain vegetation landscape, as input and generates the corresponding composite texture models. From such models, arbitrary amounts of similar, non-repetitive texture can be generated (i.e. without verbatim copying). The creation of the composite texture models follows a kind of bootstrap procedure, where simple texture features help to generate the label map and then more complicated texture descriptions are called on for the subtextures.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128015168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10077
Structure from Small Baseline Motion with Central Panoramic Cameras
O. Shakernia, R. Vidal, S. Sastry
In applications of egomotion estimation, such as real-time vision-based navigation, one must deal with the double-edged sword of small relative motions between images. On one hand, tracking feature points is easier; on the other, two-view structure-from-motion algorithms are poorly conditioned due to the low signal-to-noise ratio. In this paper, we derive a multi-frame structure-from-motion algorithm for calibrated central panoramic cameras. Our algorithm avoids the conditioning problem by explicitly incorporating the small-baseline assumption in its design. The proposed algorithm is linear, amenable to real-time implementation, and performs well in the small-baseline domain for which it is designed.
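For flavor, a generic linear small-motion solve of the kind such algorithms build on (the standard differential-epipolar least-squares setup on lifted rays, not the paper's specific multi-frame algorithm):

```python
import numpy as np

def small_motion_velocity(q, qdot):
    """Linear differential-epipolar solve: for each unit ray q with flow
    qdot, v . (q x qdot) + q^T S q = 0, with S symmetric.

    q, qdot: N x 3 arrays (N >= 9) of lifted rays (e.g., back-projected
    central panoramic points) and their flow. Returns the translational
    velocity direction v and the 6 entries of S; recovering the
    rotational velocity from (v, S) follows the standard decomposition,
    omitted here.
    """
    cross = np.cross(q, qdot)                      # coefficients of v
    qx, qy, qz = q[:, 0], q[:, 1], q[:, 2]
    quad = np.stack([qx*qx, qy*qy, qz*qz,
                     2*qx*qy, 2*qx*qz, 2*qy*qz], axis=1)  # coefficients of S
    A = np.hstack([cross, quad])                   # N x 9 data matrix
    # Null-space estimate: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    theta = Vt[-1]
    return theta[:3], theta[3:]
```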
{"title":"Structure from Small Baseline Motion with Central Panoramic Cameras","authors":"O. Shakernia, R. Vidal, S. Sastry","doi":"10.1109/CVPRW.2003.10077","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10077","url":null,"abstract":"In applications of egomotion estimation, such as real-time vision-based navigation, one must deal with the double-edged sword of small relative motions between images. On one hand, tracking feature points is easier, while on the other, two-view structure-from-motion algorithms are poorly conditioned due to the low signal-to-noise ratio. In this paper, we derive a multi-frame structure from motion algorithm for calibrated central panoramic cameras. Our algorithm avoids the conditioning problem by explicitly incorporating the small baseline assumption in the algorithm's design. The proposed algorithm is linear, amenable to real-time implementation, and performs well in the small baseline domain for which it is designed.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130962900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10004
The Beauvais Cathedral Project
P. Allen, Alejandro J. Troccoli, Benjamin Smith, I. Stamos, Stephen Murray
Preserving cultural heritage and historic sites is an important problem. These sites are subject to erosion and vandalism, and, as long-lived artifacts, they have gone through many phases of construction, damage, and repair. It is important to keep an accurate record of these sites as they currently are, using 3-D model-building technology, so that preservationists can track changes, foresee structural problems, and allow a wider audience to "virtually" see and tour these sites. Due to the complexity of these sites, building 3-D models is time consuming and difficult, usually involving much manual effort. This paper discusses new methods that can reduce the time to build a model using automatic techniques. Examples of these methods are shown in reconstructing a model of the Cathedral of Saint-Pierre in Beauvais, France.
{"title":"The Beauvais Cathedral Project","authors":"P. Allen, Alejandro J. Troccoli, Benjamin Smith, I. Stamos, Stephen Murray","doi":"10.1109/CVPRW.2003.10004","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10004","url":null,"abstract":"Preserving cultural heritage and historic sites is an important problem. These sites are subject to erosion, vandalism, and as long-lived artifacts, they have gone through many phases of construction, damage and repair. It is important to keep an accurate record of these sites using 3-D model building technology as they currently are, so preservationists can track changes, foresee structural problems, and allow a wider audience to \"virtually\" see and tour these sites. Due to the complexity of these sites, building 3-D models is time consuming and difficult, usually involving much manual effort. This paper discusses new methods that can reduce the time to build a model using automatic methods. Examples of these methods are shown in reconstructing a model of the Cathedral of Saint-Pierre in Beauvais, France.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123761599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}