In this paper we present an algorithm to perform interactive Boolean operations on free-form solids bounded by surfels. We introduce a fast inside-outside test to check whether surfels lie within the bounds of another surfel-bounded solid. This enables us to add, subtract, and intersect complex solids at interactive rates. Our algorithm is fast both in constructing and in displaying the new geometry resulting from the Boolean operation. We present a resampling operator to solve problems caused by sharp edges in the resulting solid. The operator resamples the surfels that intersect the surface of the other solid, enabling us to represent sharp edges in great detail. We believe our algorithm to be an ideal tool for the interactive editing of free-form solids.
{"title":"Interactive boolean operations on surfel-bounded solids","authors":"B. Adams, P. Dutré","doi":"10.1145/1201775.882320","DOIUrl":"https://doi.org/10.1145/1201775.882320","url":null,"abstract":"In this paper we present an algorithm to perform interactive boolean operations on free-form solids bounded by surfels. We introduce a fast inside-outside test to check whether surfels lie within the bounds of another surfel-bounded solid. This enables us to add, subtract and intersect complex solids at interactive rates. Our algorithm is fast both in displaying and constructing the new geometry resulting from the boolean operation.We present a resampling operator to solve problems resulting from sharp edges in the resulting solid. The operator resamples the surfels intersecting with the surface of the other solid. This enables us to represent the sharp edges with great detail.We believe our algorithm to be an ideal tool for interactive editing of free-form solids.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128183594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While 2D and 3D vector fields are ubiquitous in computational sciences, their use in graphics is often limited to regular grids, where computations are easily handled through finite-difference methods. In this paper, we propose a set of simple and accurate tools for the analysis of 3D discrete vector fields on arbitrary tetrahedral grids. We introduce a variational, multiscale decomposition of vector fields into three intuitive components: a divergence-free part, a curl-free part, and a harmonic part. We show how our discrete approach matches its well-known smooth analog, called the Helmholtz-Hodge decomposition, and that the resulting computational tools have a very intuitive geometric interpretation. We demonstrate the versatility of these tools in a series of applications, ranging from data visualization to fluid and deformable object simulation.
{"title":"Discrete multiscale vector field decomposition","authors":"Y. Tong, S. Lombeyda, A. N. Hirani, M. Desbrun","doi":"10.1145/1201775.882290","DOIUrl":"https://doi.org/10.1145/1201775.882290","url":null,"abstract":"While 2D and 3D vector fields are ubiquitous in computational sciences, their use in graphics is often limited to regular grids, where computations are easily handled through finite-difference methods. In this paper, we propose a set of simple and accurate tools for the analysis of 3D discrete vector fields on arbitrary tetrahedral grids. We introduce a variational, multiscale decomposition of vector fields into three intuitive components: a divergence-free part, a curl-free part, and a harmonic part. We show how our discrete approach matches its well-known smooth analog, called the Helmotz-Hodge decomposition, and that the resulting computational tools have very intuitive geometric interpretation. We demonstrate the versatility of these tools in a series of applications, ranging from data visualization to fluid and deformable object simulation.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114279417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present an efficient method for simulating highly detailed large-scale participating media such as the nuclear explosions shown in figure 1. We capture these phenomena by simulating the motion of particles in a velocity field generated by fluid dynamics. A novel aspect of this paper is the creation of highly detailed three-dimensional turbulent velocity fields at interactive rates using a low to moderate amount of memory. The key idea is the combination of two-dimensional high-resolution physically based flow fields with a moderate-sized three-dimensional Kolmogorov velocity field tiled periodically in space.
{"title":"Smoke simulation for large scale phenomena","authors":"Nick Rasmussen, Duc Quang Nguyen, William A. Geiger, Ronald Fedkiw","doi":"10.1145/1201775.882335","DOIUrl":"https://doi.org/10.1145/1201775.882335","url":null,"abstract":"In this paper, we present an efficient method for simulating highly detailed large scale participating media such as the nuclear explosions shown in figure 1. We capture this phenomena by simulating the motion of particles in a fluid dynamics generated velocity field. A novel aspect of this paper is the creation of highly detailed three-dimensional turbulent velocity fields at interactive rates using a low to moderate amount of memory. The key idea is the combination of two-dimensional high resolution physically based flow fields with a moderate sized three-dimensional Kolmogorov velocity field tiled periodically in space.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133139432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a technique for estimating the spatially-varying reflectance properties of a surface based on its appearance during a single pass of a linear light source. By using a linear light rather than a point light source as the illuminant, we are able to reliably observe and estimate the diffuse color, specular color, and specular roughness of each point of the surface. The reflectometry apparatus we use is simple and inexpensive to build, requiring a single direction of motion for the light source and a fixed camera viewpoint. Our model fitting technique first renders a reflectance table of how diffuse and specular reflectance lobes would appear under moving linear light source illumination. Then, for each pixel, we compare its series of intensity values to the tabulated reflectance lobes to determine which reflectance model parameters most closely produce the observed reflectance values. Using two passes of the linear light source at different angles, we can also estimate per-pixel surface normals as well as the reflectance parameters. Additionally, our system records a per-pixel height map for the object and estimates its per-pixel translucency. We produce real-time renderings of the captured objects using a custom hardware shading algorithm. We apply the technique to a test object exhibiting a variety of materials as well as to an illuminated manuscript with gold lettering. To demonstrate the technique's accuracy, we compare renderings of the captured models to real photographs of the original objects.
{"title":"Linear light source reflectometry","authors":"A. Gardner, C. Tchou, Tim Hawkins, P. Debevec","doi":"10.1145/1201775.882342","DOIUrl":"https://doi.org/10.1145/1201775.882342","url":null,"abstract":"This paper presents a technique for estimating the spatially-varying reflectance properties of a surface based on its appearance during a single pass of a linear light source. By using a linear light rather than a point light source as the illuminant, we are able to reliably observe and estimate the diffuse color, specular color, and specular roughness of each point of the surface. The reflectometry apparatus we use is simple and inexpensive to build, requiring a single direction of motion for the light source and a fixed camera viewpoint. Our model fitting technique first renders a reflectance table of how diffuse and specular reflectance lobes would appear under moving linear light source illumination. Then, for each pixel we compare its series of intensity values to the tabulated reflectance lobes to determine which reflectance model parameters most closely produce the observed reflectance values. Using two passes of the linear light source at different angles, we can also estimate per-pixel surface normals as well as the reflectance parameters. Additionally our system records a per-pixel height map for the object and estimates its per-pixel translucency. We produce real-time renderings of the captured objects using a custom hardware shading algorithm. We apply the technique to a test object exhibiting a variety of materials as well as to an illuminated manuscript with gold lettering. To demonstrate the technique's accuracy, we compare renderings of the captured models to real photographs of the original objects.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130562237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cutting up a complex object into simpler sub-objects is a fundamental problem in various disciplines. In image processing, images are segmented, while in computational geometry, solid polyhedra are decomposed. In recent years, polygonal meshes in computer graphics have similarly been decomposed into sub-meshes. In this paper we propose a novel hierarchical mesh decomposition algorithm. Our algorithm computes a decomposition into the meaningful components of a given mesh, which generally means segmenting at regions of deep concavity. The algorithm also avoids over-segmentation and jagged boundaries between the components. Finally, we demonstrate the utility of the algorithm in control-skeleton extraction.
{"title":"Hierarchical mesh decomposition using fuzzy clustering and cuts","authors":"S. Katz, A. Tal","doi":"10.1145/1201775.882369","DOIUrl":"https://doi.org/10.1145/1201775.882369","url":null,"abstract":"Cutting up a complex object into simpler sub-objects is a fundamental problem in various disciplines. In image processing, images are segmented while in computational geometry, solid polyhedra are decomposed. In recent years, in computer graphics, polygonal meshes are decomposed into sub-meshes. In this paper we propose a novel hierarchical mesh decomposition algorithm. Our algorithm computes a decomposition into the meaningful components of a given mesh, which generally refers to segmentation at regions of deep concavities. The algorithm also avoids over-segmentation and jaggy boundaries between the components. Finally, we demonstrate the utility of the algorithm in control-skeleton extraction.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133936255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Structural comparison of large trees is a difficult task that is only partially supported by current visualization techniques, which are mainly designed for browsing. We present TreeJuxtaposer, a system designed to support the comparison task for large trees of several hundred thousand nodes. We introduce the idea of "guaranteed visibility", where highlighted areas are treated as landmarks that must remain visually apparent at all times. We propose a new methodology for detailed structural comparison between two trees and provide a new nearly-linear algorithm for computing the best corresponding node from one tree to another. In addition, we present a new rectilinear Focus+Context technique for navigation that is well suited to the dynamic linking of side-by-side views while guaranteeing landmark visibility and constant frame rates. These three contributions result in a system delivering a fluid exploration experience that scales both in the size of the dataset and the number of pixels in the display. We have based the design decisions for our system on the needs of a target audience of biologists who must understand the structural details of many phylogenetic, or evolutionary, trees. Our tool is also useful in many other application domains where tree comparison is needed, ranging from network management to call graph optimization to genealogy.
{"title":"TreeJuxtaposer: scalable tree comparison using Focus+Context with guaranteed visibility","authors":"T. Munzner, François Guimbretière, S. Tasiran, Li Zhang, Yunhong Zhou","doi":"10.1145/1201775.882291","DOIUrl":"https://doi.org/10.1145/1201775.882291","url":null,"abstract":"Structural comparison of large trees is a difficult task that is only partially supported by current visualization techniques, which are mainly designed for browsing. We present TreeJuxtaposer, a system designed to support the comparison task for large trees of several hundred thousand nodes. We introduce the idea of \"guaranteed visibility\", where highlighted areas are treated as landmarks that must remain visually apparent at all times. We propose a new methodology for detailed structural comparison between two trees and provide a new nearly-linear algorithm for computing the best corresponding node from one tree to another. In addition, we present a new rectilinear Focus+Context technique for navigation that is well suited to the dynamic linking of side-by-side views while guaranteeing landmark visibility and constant frame rates. These three contributions result in a system delivering a fluid exploration experience that scales both in the size of the dataset and the number of pixels in the display. We have based the design decisions for our system on the needs of a target audience of biologists who must understand the structural details of many phylogenetic, or evolutionary, trees. Our tool is also useful in many other application domains where tree comparison is needed, ranging from network management to call graph optimization to genealogy.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128283372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time animation of human-like characters is an active research area in computer graphics. Conventional approaches, however, have hardly dealt with the rhythmic patterns of motions, which are essential in handling rhythmic motions such as dancing and locomotion. In this paper, we present a novel scheme for synthesizing a new motion from unlabelled example motions while preserving their rhythmic pattern. Our scheme first captures the motion beats from the example motions to extract the basic movements and their transitions. Based on those data, our scheme then constructs a movement transition graph that represents the example motions. Given an input sound signal, our scheme finally synthesizes a novel motion in an on-line manner by traversing the movement transition graph, synchronized with the input sound signal and satisfying kinematic constraints given both explicitly and implicitly. Through experiments, we have demonstrated that our scheme can effectively produce a variety of rhythmic motions.
{"title":"Rhythmic-motion synthesis based on motion-beat analysis","authors":"Tae-Hoon Kim, Sang Il Park, Sung-yong Shin","doi":"10.1145/1201775.882283","DOIUrl":"https://doi.org/10.1145/1201775.882283","url":null,"abstract":"Real-time animation of human-like characters is an active research area in computer graphics. The conventional approaches have, however, hardly dealt with the rhythmic patterns of motions, which are essential in handling rhythmic motions such as dancing and locomotive motions. In this paper, we present a novel scheme for synthesizing a new motion from unlabelled example motions while preserving their rhythmic pattern. Our scheme first captures the motion beats from the example motions to extract the basic movements and their transitions. Based on those data, our scheme then constructs a movement transition graph that represents the example motions. Given an input sound signal, our scheme finally synthesizes a novel motion in an on-line manner while traversing the motion transition graph, which is synchronized with the input sound signal and also satisfies kinematic constraints given explicitly and implicitly. Through experiments, we have demonstrated that our scheme can effectively produce a variety of rhythmic motions.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"655 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133061755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an image-based technique to relight real objects illuminated by a 4D incident light field, representing the illumination of an environment. By exploiting the richness in angular and spatial variation of the light field, objects can be relit with a high degree of realism. We record photographs of an object, illuminated from various positions and directions, using a projector mounted on a gantry as a moving light source. The resulting basis images are used to create a subset of the full reflectance field of the object. Using this reflectance field, we can create an image of the object relit with any incident light field and observed from a fixed camera position. To maintain acceptable recording times and reduce the amount of data, we propose an efficient data acquisition method. Since the object can be relit with a 4D incident light field, illumination effects encoded in the light field, such as shafts of shadow or spotlight effects, can be realized.
{"title":"Relighting with 4D incident light fields","authors":"Vincent Masselus, P. Peers, P. Dutré, Y. Willems","doi":"10.1145/1201775.882315","DOIUrl":"https://doi.org/10.1145/1201775.882315","url":null,"abstract":"We present an image-based technique to relight real objects illuminated by a 4D incident light field, representing the illumination of an environment. By exploiting the richness in angular and spatial variation of the light field, objects can be relit with a high degree of realism.We record photographs of an object, illuminated from various positions and directions, using a projector mounted on a gantry as a moving light source. The resulting basis images are used to create a subset of the full reflectance field of the object. Using this reflectance field, we can create an image of the object, relit with any incident light field and observed from a flxed camera position.To maintain acceptable recording times and reduce the amount of data, we propose an efficient data acquisition method.Since the object can be relit with a 4D incident light field, illumination effects encoded in the light field, such as shafts of shadow or spot light effects, can be realized.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115430919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A free-form deformation that warps a surface or solid may be specified in terms of one or several point-displacement constraints that must be interpolated by the deformation. The Twister approach introduced here adds the capability to impose an orientation change, adding three rotational constraints at each displaced point. Furthermore, it solves for a space warp that simultaneously interpolates two sets of such displacement and orientation constraints. With a 6 DoF magnetic tracker in each hand, the user may grab two points on or near the surface of an object and simultaneously drag them to new locations while rotating the trackers to tilt, bend, or twist the shape near the displaced points. Using a new formalism based on a weighted average of screw displacements, Twister computes in real time a smooth deformation whose effect decays with distance from the grabbed points, simultaneously interpolating the 12 constraints. It is continuously applied to the shape, providing real-time graphic feedback. The two-handed interface and the resulting deformation are intuitive and hence offer an effective direct-manipulation tool for creating or modifying 3D shapes.
{"title":"Twister: a space-warp operator for the two-handed editing of 3D shapes","authors":"Ignacio Llamas, ByungMoon Kim, Joshua Gargus, J. Rossignac, Chris Shaw","doi":"10.1145/1201775.882323","DOIUrl":"https://doi.org/10.1145/1201775.882323","url":null,"abstract":"A free-form deformation that warps a surface or solid may be specified in terms of one or several point-displacement constraints that must be interpolated by the deformation. The Twister approach introduced here, adds the capability to impose an orientation change, adding three rotational constraints, at each displaced point. Furthermore, it solves for a space warp that simultaneously interpolates two sets of such displacement and orientation constraints. With a 6 DoF magnetic tracker in each hand, the user may grab two points on or near the surface of an object and simultaneously drag them to new locations while rotating the trackers to tilt, bend, or twist the shape near the displaced points. Using a new formalism based on a weighted average of screw displacements, Twister computes in realtime a smooth deformation, whose effect decays with distance from the grabbed points, simultaneously interpolating the 12 constraints. It is continuously applied to the shape, providing realtime graphic feedback. The two-hand interface and the resulting deformation are intuitive and hence offer an effective direct manipulation tool for creating or modifying 3D shapes.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116270226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Light scattering from hair is normally simulated in computer graphics using Kajiya and Kay's classic phenomenological model. We have made new measurements of scattering from individual hair fibers that exhibit visually significant effects not predicted by Kajiya and Kay's model. Our measurements go beyond previous hair measurements by examining out-of-plane scattering, and together with this previous work they show multiple specular highlights and variation in scattering with rotation about the fiber axis. We explain the sources of these effects using a model of a hair fiber as a transparent elliptical cylinder with an absorbing interior and a surface covered with tilted scales. Based on an analytical scattering function for a circular cylinder, we propose a practical shading model for hair that qualitatively matches the scattering behavior shown in the measurements. In a comparison between a photograph and rendered images, we demonstrate the new model's ability to match the appearance of real hair.
{"title":"Light scattering from human hair fibers","authors":"Steve Marschner, H. Jensen, Mike Cammarano, Steven Worley, P. Hanrahan","doi":"10.1145/1201775.882345","DOIUrl":"https://doi.org/10.1145/1201775.882345","url":null,"abstract":"Light scattering from hair is normally simulated in computer graphics using Kajiya and Kay's classic phenomenological model. We have made new measurements of scattering from individual hair fibers that exhibit visually significant effects not predicted by Kajiya and Kay's model. Our measurements go beyond previous hair measurements by examining out-of-plane scattering, and together with this previous work they show a multiple specular highlight and variation in scattering with rotation about the fiber axis. We explain the sources of these effects using a model of a hair fiber as a transparent elliptical cylinder with an absorbing interior and a surface covered with tilted scales. Based on an analytical scattering function for a circular cylinder, we propose a practical shading model for hair that qualitatively matches the scattering behavior shown in the measurements. In a comparison between a photograph and rendered images, we demonstrate the new model's ability to match the appearance of real hair.","PeriodicalId":314969,"journal":{"name":"ACM SIGGRAPH 2003 Papers","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123550798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}