Ke Chen, Charly Collin, Ajit Hakke Patil, S. Pattanaik
Accurately modeling the BRDF of real-world materials is important and challenging for realistic image synthesis. For a majority of materials, most of the incident light enters the material and undergoes multiple scattering beneath the surface before exiting through the surface as reflection. Physically correct BRDF modeling must take this subsurface volumetric light transport into account. Most accurate numerical solution methods for volumetric light transport (e.g., Monte Carlo, Discrete Ordinate Methods (DOM)) compute the radiance field for the whole volume and are therefore expensive. Since the BRDF ultimately relates only the outgoing radiation field at the boundary to the incident radiation, the radiation field computed for the bulk of the material provides no useful information, and the effort spent computing it is wasted. For efficient BRDF computation, a method that computes the radiance field only at the boundary is therefore preferable. The search for such a method led us to Ambartsumian's method [Sobolev 1975; Mishchenko et al. 1999].
A practical model for computing the BRDF of real world materials. Ke Chen, Charly Collin, Ajit Hakke Patil, S. Pattanaik. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2013), p. 180. DOI: 10.1145/2448196.2448228
Marilena Maule, J. Comba, Rafael P. Torchelsen, R. Bastos
Hybrid transparency is an approach for real-time approximation of order-independent transparency (OIT). Our hybrid approach combines accurate compositing of a few core transparent layers with a quick approximation of the remaining layers. Its main advantage, the ability to operate in bounded memory without noticeable artifacts, enables its use at scene complexities and image resolutions that other approaches fail to handle. Hybrid transparency is suitable for highly parallel execution, can be implemented on current GPUs, and can be further improved with minimal architecture changes. We present quality, memory, and performance analyses and comparisons demonstrating that hybrid transparency generates high-quality images at competitive frame rates and with the lowest memory consumption among comparable OIT techniques.
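The core/tail split can be sketched in a few lines. The toy compositor below uses scalar colours and our own function names; the paper's actual tail approximation and GPU data layout may differ. It composites the k nearest fragments exactly, front to back, and merges all remaining fragments with an order-independent alpha-weighted average plus accumulated coverage, so per-pixel memory stays bounded at k fragments:

```python
import math

def composite_hybrid(fragments, k, background):
    """fragments: iterable of (depth, color, alpha); scalar colors for brevity."""
    frags = sorted(fragments, key=lambda f: f[0])
    core, tail = frags[:k], frags[k:]

    color, trans = 0.0, 1.0
    # Exact front-to-back compositing of the k core layers.
    for _, c, a in core:
        color += trans * a * c
        trans *= (1.0 - a)

    # Order-independent tail: alpha-weighted average colour with accumulated
    # coverage -- the result does not depend on the tail fragments' order.
    if tail:
        acc_a = sum(a for _, _, a in tail)
        tail_color = sum(c * a for _, c, a in tail) / acc_a
        tail_alpha = 1.0 - math.prod(1.0 - a for _, _, a in tail)
        color += trans * tail_alpha * tail_color
        trans *= (1.0 - tail_alpha)

    return color + trans * background
```

When every fragment fits in the core (k large enough), this degenerates to exact sorted compositing; shrinking k trades accuracy in distant layers for bounded memory.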
Hybrid transparency. Marilena Maule, J. Comba, Rafael P. Torchelsen, R. Bastos. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2013), pp. 103-118. DOI: 10.1145/2448196.2448212
This paper presents a technique for interactively deforming and colliding with mesostructures at a per-texel level. It is compatible with a broad range of existing mesostructure rendering techniques, including both safe and unsafe ray-height-field intersection algorithms. The technique integrates well with existing physics engines and reduces traditional 3D geometric deformations (vertex-based) to 2D image-space operations (pixel-based) that are parallelized on a GPU without CPU-GPU data shuffling. Additionally, surface and material properties may be specified at a per-texel level, enabling a mesostructure to possess varying attributes intrinsic to its surface and collision behavior; this also offers an image-based alternative to traditional decals. The technique provides a simple way to make almost every surface in a virtual world responsive to user actions and events. It requires no preprocessing time and at most one additional texture of storage. The algorithm builds on existing displacement-map algorithms and physics engines and can easily be incorporated into new or existing game pipelines.
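As a rough illustration of the image-space idea, a collision can be reduced to a 2D stamp applied to the displacement map. The sketch below is a hypothetical CPU stand-in with a linear falloff kernel of our own choosing; the paper performs such per-texel updates on the GPU. A contact lowers the height map around the collision point, clamped to a floor value so the surface cannot deform through its base:

```python
def deform(heightmap, cx, cy, radius, strength, floor=0.0):
    """Subtract a radial stamp from a 2D height map around texel (cx, cy)."""
    h, w = len(heightmap), len(heightmap[0])
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if d2 <= radius * radius:
                falloff = 1.0 - (d2 ** 0.5) / radius   # linear falloff kernel
                heightmap[y][x] = max(floor,
                                      heightmap[y][x] - strength * falloff)
    return heightmap
```

Because each texel update is independent, the double loop maps directly to a per-pixel GPU pass, which is what makes the approach cheap enough to apply to almost every surface.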
Interactive mesostructures. S. Nykl, C. Mourning, D. Chelberg. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2013), pp. 37-44. DOI: 10.1145/2448196.2448202
Light propagation in scenes with translucent objects is hard to model efficiently for interactive applications. The inter-reflections between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte Carlo-based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflections or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflections and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form-factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based, or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near-interactive rates.
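The extended system can be written as B = E + S F B, where F is the form-factor matrix and S the subsurface scattering matrix; taking S = ρI recovers classical diffuse radiosity, while a dense S redistributes light arriving at one patch across a translucent object's surface. A minimal sketch of the iterative solve, with made-up matrices purely for illustration:

```python
import numpy as np

# Minimal sketch with made-up matrices: radiosity extended by a subsurface
# scattering matrix S, solved by fixed-point (Jacobi-style) iteration:
#   B_{k+1} = E + S @ F @ B_k
# which converges to (I - S F)^{-1} E when the spectral radius of S F is < 1.

def solve_radiosity(E, F, S, iters=500):
    B = E.copy()
    for _ in range(iters):
        B = E + S @ (F @ B)
    return B
```

Because S enters the iteration exactly where the reflectance did in classical radiosity, existing fast iterative solvers carry over unchanged, which is what enables the near-interactive relighting rates reported.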
A practical analytic model for the radiosity of translucent scenes. Yu Sheng, Yulong Shi, Lili Wang, S. Narasimhan. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2013), pp. 63-70. DOI: 10.1145/2448196.2448206
Recently, applications using human body motion data have increased with the progress of motion capture technologies and the spread of practical motion capture systems. In addition, touch panels have become popular as input devices that can be used easily and intuitively. Taking advantage of these trends, we have been developing a support system for dance creation using motion data [A. Soga 2009]. This paper describes a motion synthesis system for dance using a tablet computer. Our system provides functions to synthesize motions, and users control it with a tablet. Users create short choreographies by selecting body-part motion clips and composing them into whole-body motions. The system allows users to select each motion clip and preview it as real-time 3DCG. Some motion clips can be selected by flicking the tablet screen and blended into a base motion; other body-part motion clips can replace the corresponding part of the base motion. Figure 1 shows an overview of the system.
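The per-part composition the system describes can be sketched simply. The data layout below (joint-channel dictionaries) and function names are hypothetical, not the system's actual representation: a body-part clip either replaces the base motion's channels for that part or is blended into them. A production system would blend joint rotations with quaternion slerp; the linear blend here is for brevity only:

```python
def compose(base, part_clips, blend=None):
    """base: {joint: [value per frame]}; part_clips: list of same-shaped subsets.

    blend=None replaces the part's channels outright; blend in (0, 1] mixes
    the clip into the base motion (linear stand-in for rotation slerp).
    """
    out = {joint: list(values) for joint, values in base.items()}
    for clip in part_clips:
        for joint, values in clip.items():
            if blend is None:
                out[joint] = list(values)            # replace the body part
            else:
                out[joint] = [(1.0 - blend) * b + blend * v
                              for b, v in zip(out[joint], values)]
    return out
```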
A motion synthesis system for dance using a tablet. A. Soga, Sakiko Matsumoto. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2013), p. 178. DOI: 10.1145/2448196.2448226
We present a new algorithm for encoding low-dynamic-range images into fixed-rate texture compression formats. Our approach provides orders-of-magnitude improvements in speed over existing publicly available compressors while generating high-quality results. The algorithm is applicable to any fixed-rate texture encoding scheme based on Block Truncation Coding, and we use it to compress images into the OpenGL BPTC format. The underlying technique uses an axis-aligned bounding box to estimate the proper partitioning of a texel block and performs a generalized cluster fit to compute the endpoint approximation. This approximation can be further refined using simulated annealing. The algorithm is inherently parallel and scales with the number of processor cores. We highlight its performance on low-frequency game textures and the high-frequency Kodak Test Image Suite.
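The bounding-box endpoint estimate can be sketched as a BC1-style encoder. This is a simplified stand-in, not the paper's BPTC path with partition estimation, cluster fit, and annealing: the corners of the block's color-space AABB serve as the two endpoints, and each texel snaps to the nearest of four palette entries interpolated between them:

```python
import numpy as np

# Simplified BC1-style stand-in for AABB endpoint estimation: endpoints are the
# min/max corners of the block's colour bounding box; each texel is assigned the
# index of the nearest of 4 interpolated palette entries. A cluster-fit or
# simulated-annealing pass would refine the endpoints further.

def _palette(lo, hi):
    return np.array([lo + t * (hi - lo) for t in (0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0)])

def encode_block(block):
    """block: (16, 3) array of RGB texels -> (lo, hi, per-texel indices)."""
    lo, hi = block.min(axis=0), block.max(axis=0)       # AABB corners
    d = ((block[:, None, :] - _palette(lo, hi)[None, :, :]) ** 2).sum(axis=2)
    return lo, hi, d.argmin(axis=1)

def decode_block(lo, hi, indices):
    return _palette(lo, hi)[indices]
```

Storing only two endpoints and 2-bit indices per texel is what makes the format fixed-rate; the encoder's job is entirely in choosing `lo`, `hi`, and the index assignment well.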
FasTC: accelerated fixed-rate texture encoding. Pavel Krajcevski, Adam T. Lake, Dinesh Manocha. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2013), pp. 137-144. DOI: 10.1145/2448196.2448218
In many research domains, such as mechanical engineering, game development, and virtual reality, a typical output model is usually produced in a multi-object manner for efficient data management. To describe the whole model completely, tens of thousands, or even millions, of such objects are created, which makes the entire dataset exceptionally complex. Consequently, visualizing the model becomes a computationally intensive process that impedes real-time rendering and interaction.
Integrating occlusion culling with parallel LOD for rendering complex 3D environments on GPU. Chao Peng, Yong Cao. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2013), p. 187. DOI: 10.1145/2448196.2448235
We describe the design space for real-time photon density estimation, the key step of rendering global illumination (GI) via photon mapping. We then detail and analyze efficient GPU implementations of four best-of-breed algorithms. All produce reasonable results in real time on an NVIDIA GeForce 670 at 1920 × 1080 for complex scenes with multiple-bounce diffuse effects, caustics, and glossy reflection. Across the designs, we conclude that tiled, deferred photon gathering in a compute shader gives the best combination of performance and quality.
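The tiled gathering idea can be sketched on the CPU. This is a toy 2D stand-in with our own function names; the paper's winning design runs the equivalent logic per tile in a compute shader. Photons are binned into screen-space tiles once, and each pixel then gathers only from the 3×3 tile neighbourhood covering its gather disk instead of traversing a global kd-tree:

```python
import math
from collections import defaultdict

def build_tiles(photons, tile_size):
    """Bin (x, y, power) photons into screen-space tiles."""
    tiles = defaultdict(list)
    for x, y, power in photons:
        tiles[(int(x // tile_size), int(y // tile_size))].append((x, y, power))
    return tiles

def gather(tiles, tile_size, px, py, radius):
    """Disk density estimate at (px, py); requires radius <= tile_size so the
    3x3 tile neighbourhood is guaranteed to cover the gather disk."""
    tx, ty = int(px // tile_size), int(py // tile_size)
    total = 0.0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for x, y, power in tiles.get((tx + dx, ty + dy), ()):
                if (x - px) ** 2 + (y - py) ** 2 <= radius * radius:
                    total += power
    return total / (math.pi * radius * radius)   # power per unit screen area
```

The binning pass is trivially parallel and the per-pixel loop touches a small, coherent photon list, which is why the tiled, deferred variant wins on GPU hardware.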
Toward practical real-time photon mapping: efficient GPU density estimation. Michael Mara, D. Luebke, M. McGuire. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2013), pp. 71-78. DOI: 10.1145/2448196.2448207
While global illumination is crucial for most computer graphics applications striving for photorealistic rendering, fast and efficient implementations remain challenging for real-time applications. One approach to approximating indirect illumination is to distribute virtual point lights (VPLs) at surfaces that emit indirect light. This distribution may be realized using reflective shadow maps (RSMs). A major drawback of this approach is that each surface point has to be illuminated by thousands of VPLs, making the shading step a performance bottleneck. Several approaches therefore try to reduce the shading cost, either by decreasing the number of VPLs or by reducing the number of surface points to be shaded. We propose a novel indirect shading approximation that reduces the number of surface points to be shaded to a minimum while still achieving high image quality. Even complex and animated models can thus be represented by a few dozen surface points for shading. Furthermore, our approach allows graphics artists to intuitively tune the shading by adding or changing surface points without any pre-computation. The approach is very efficient, is implemented completely on the GPU, and does not require high shader profiles. This will enable almost photorealistic rendering even on upcoming handheld devices.
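One way to read the "few surface points" idea is sketched below as a hypothetical scalar 2D toy, not the paper's exact interpolation scheme: the expensive loop over all VPLs runs only at a handful of cache points, and every other surface point interpolates the cached indirect light with inverse-distance weights:

```python
def shade_point(p, vpls):
    """Indirect light at p from all VPLs (x, y, intensity); a 1/(1+d^2)
    falloff stands in for the full geometry and visibility terms."""
    return sum(i / (1.0 + (p[0] - x) ** 2 + (p[1] - y) ** 2)
               for x, y, i in vpls)

def shade_with_caches(points, caches, vpls):
    """Evaluate all VPLs only at the cache points, then interpolate."""
    cache_vals = [(c, shade_point(c, vpls)) for c in caches]
    out = []
    for p in points:
        wsum = vsum = 0.0
        for (cx, cy), val in cache_vals:
            d2 = (p[0] - cx) ** 2 + (p[1] - cy) ** 2
            if d2 < 1e-12:                  # point coincides with a cache
                wsum, vsum = 1.0, val
                break
            w = 1.0 / d2                    # inverse-distance-squared weight
            wsum += w
            vsum += w * val
        out.append(vsum / wsum)
    return out
```

With N pixels, V VPLs, and C caches, the cost drops from O(N·V) to O(C·V + N·C); since C is a few dozen, the VPL loop stops being the bottleneck.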
Efficient shading of indirect illumination applying reflective shadow maps. P. Lensing, W. Broll. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2013), pp. 95-102. DOI: 10.1145/2448196.2448211
Volumetric phenomena are an integral part of standard rendering, yet no suitable tools are available so far for editing their characteristic properties. Either simulation results are used directly, or modifications are high-level, e.g., noise functions that influence appearance; intuitive artistic control is not possible. We propose a solution to stylize single-scattering volumetric effects. Emission, scattering, and extinction become amenable to artistic control while preserving a smooth and coherent appearance when the viewpoint changes. Our approach lets the user define a number of target views to be matched when the volume is observed from those perspectives. Via an analysis of the volumetric rendering equation, we show how to link this problem to tomographic reconstruction.
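The link to tomography arises because, with extinction held fixed, each pixel of a target view is a linear functional of the per-voxel emission e: I(ray) = Σⱼ Tⱼ σⱼ eⱼ dx, with Tⱼ the transmittance accumulated in front of voxel j. Stacking one such row per ray of every target view gives a linear system A e = b. The 1D toy below uses our own discretization and function names; the paper handles full 3D volumes and also stylizes scattering and extinction:

```python
import numpy as np

# 1D toy of the tomographic emission solve: with extinction sigma fixed, a
# pixel value is linear in the per-voxel emission e,
#   I(ray) = sum_j T_j * sigma_j * e_j * dx,  T_j = exp(-sum_{k<j} sigma_k * dx),
# so target views stack into a least-squares system A e = b.

def ray_row(sigma, dx=1.0):
    """Per-voxel weights T_j * sigma_j * dx of one axis-aligned ray."""
    T = np.exp(-np.concatenate(([0.0], np.cumsum(sigma[:-1] * dx))))
    return T * sigma * dx

def solve_emission(rows, targets):
    """Least-squares emission matching all target pixel values."""
    e, *_ = np.linalg.lstsq(np.vstack(rows), np.asarray(targets), rcond=None)
    return e
```

With fewer target views than voxels the system is underdetermined and `lstsq` returns the minimum-norm emission; regularizers or smoothness priors would be the natural refinement.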
Volume stylizer: tomography-based volume painting. Oliver Klehm, Ivo Ihrke, H. Seidel, E. Eisemann. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2013), pp. 161-168. DOI: 10.1145/2448196.2448222