GPU-based ray casting, as introduced by Krüger and Westermann [2003], is an effective method for volumetric rendering. Unfortunately, conventional methods of Empty Space Skipping (ESS) using spatial partitioning, which accelerate ray casting by culling ray-surface intersection tests in empty parts of the volume, do not align well with GPU architectures. The CPU is usually required for tree generation and traversal, followed by data transfer from CPU to GPU. Such CPU-based pre-processing is time-consuming, with the result that spatial tree structures are, in practice, restricted to static datasets.
{"title":"Interactive GPU-based octree generation and traversal","authors":"Chen Wei, J. Gain, P. Marais","doi":"10.1145/2159616.2159657","DOIUrl":"https://doi.org/10.1145/2159616.2159657","url":null,"abstract":"GPU-based ray casting, as introduced by Krüger and Westermann [2003], is an effective method for volumetric rendering. Unfortunately, conventional methods of Empty Space Skipping (ESS) using spatial partitioning, which accelerate ray casting by culling ray-surface intersection tests in empty parts of the volume, do not align well with GPU architectures. CPUs are usually required for tree generation and parsing, as well as the data transfer from CPU to GPU. Such CPU-based pre-processing is time-consuming, with the result that spatial tree structures are invariably applied to static datasets.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"26 1","pages":"211"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87518736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The storage requirements for rendering with arbitrary tabular BRDFs can be quite large. This limits the number of BRDFs that can be used in a scene to only a few. Furthermore, material parameters can be too complex to store and render per-pixel.
{"title":"Linear compression for spatially-varying BRDFs","authors":"S. Braeger, C. Hughes","doi":"10.1145/2159616.2159658","DOIUrl":"https://doi.org/10.1145/2159616.2159658","url":null,"abstract":"The storage requirements for rendering with arbitrary tabular BRDFs can be quite large. This limits the number of BRDFs that can be in used in a scene to only a few. Furthermore, material parameters can be too complex to store and render per-pixel.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"29 1","pages":"212"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91328031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physical simulation is now a robust and common approach to recreating reality in virtual worlds and is almost universally used in the animation of natural phenomena, ballistic objects, and character accessories such as clothing and hair. Despite these great strides, the animation of primary characters continues to be dominated by the kinematic techniques of motion capture and, above all, traditional keyframing. Two aspects of a primary character in particular, skeletal and facial motion, are often laboriously animated using kinematics. There are perhaps three chief reasons for this. First, kinematics, unencumbered by physics, provides the fine level of control necessary for animators to breathe life and personality into their characters. Second, this control is direct and history-free: the authored state of the character, set at any point in time, is precisely observed upon playback, and its impact on the animation is localized to a neighborhood around that time. Third, animator interaction with the timeline is WYSIWYG (what-you-see-is-what-you-get), allowing animators to scrub to various points in time and observe the character state without having to play back the entire animation. Secondary dynamics can be overlaid on primarily kinematic character motion to enhance its visceral feel, but unfortunately doing so compromises the second and third reasons animators rely on pure kinematic control.
{"title":"Editing and constraining kinematic approximations of dynamic motion","authors":"Cyrus Rahgoshay, A. Rabbani, Karan Singh, P. Kry","doi":"10.1145/2159616.2159652","DOIUrl":"https://doi.org/10.1145/2159616.2159652","url":null,"abstract":"Physical simulation is now a robust and common approach to recreating reality in virtual worlds and is almost universally used in the animation of natural phenomena, ballistic objects, and character accessories like clothing and hair. Despite these great strides, the animation of primary characters continues to be dominated by the kinematic techniques of motion capture and above all traditional keyframing. Two aspects of a primary character in particular, skeletal and facial motion, are often laboriously animated using kinematics. There are perhaps three chief reasons for this. First, kinematics, unencumbered by physics, provides the finest level of control necessary for animators to breathe life and personality into their characters. Second, this control is direct and history-free, in that the authored state of the character, set at any point in time is precisely observed upon playback and its impact on the animation is localized to a neighborhood around that time. Third, animator interaction with the time line is WYSIWYG (what-you-see-is-what-you-get), allowing them to scrub to various points in time and observe the character state without having to playback the entire animation. Secondary dynamics can be overlaid on primarily kinematic character motion to enhance the visceral feel of their characters. But unfortunately compromise the second and third reasons animators rely on pure kinematic control.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"1 1","pages":"206"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83071453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Henry Schäfer, Magdalena Prus, Quirin Meyer, J. Süßmuth, M. Stamminger
We present a novel representation for storing sub-triangle signals, such as colors, normals, or displacements, directly with the triangle mesh. Signal samples are stored in a layout guided by hardware-tessellation patterns. Thus, we can render directly from our representation by assigning signal samples to attributes of the vertices generated by the hardware tessellator. Unlike texture mapping, our approach does not require any atlas generation, chartification, or uv-unwrapping. Thus, it does not suffer from texture-related artifacts, such as discontinuities across chart boundaries or distortion. Moreover, our approach allows specifying the optimal sampling rate adaptively on a per-triangle basis, resulting in significant memory savings for most signal types. We propose a signal-optimal approach for converting arbitrary signals, including existing assets with textures or mesh colors, into our representation. Further, we provide efficient algorithms for mip-mapping and bi- and tri-linear interpolation directly in our representation. Our approach is optimally suited for displacement mapping: it automatically generates crack-free, view-dependent displacement-mapped models enabling continuous level-of-detail.
{"title":"Multiresolution attributes for tessellated meshes","authors":"Henry Schäfer, Magdalena Prus, Quirin Meyer, J. Süßmuth, M. Stamminger","doi":"10.1145/2159616.2159645","DOIUrl":"https://doi.org/10.1145/2159616.2159645","url":null,"abstract":"We present a novel representation for storing sub-triangle signals, such as colors, normals, or displacements directly with the triangle mesh. Signal samples are stored as guided by hardware-tessellation patterns. Thus, we can directly render from our representation by assigning signal samples to attributes of vertices generated by the hardware tessellator.\u0000 Contrary to texture mapping, our approach does not require any atlas generation, chartification, or uv-unwrapping. Thus, it does not suffer from texture-related artifacts, such as discontinuities across chart boundaries or distortion. Moreover, our approach allows specifying the optimal sampling rate adaptively on a per triangle basis, resulting in significant memory savings for most signal types.\u0000 We propose a signal optimal approach for converting arbitrary signals, including existing assets with textures or mesh colors, into our representation. Further, we provide efficient algorithms for mip-mapping, bi- and tri-linear interpolation directly in our representation. Our approach is optimally suited for displacement mapping: it automatically generates crack-free, view-dependent displacement mapped models enabling continuous level-of-detail.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"142 1","pages":"175-182"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78927160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this poster, we present a volume rendering framework that achieves real-time rendering of global illumination effects for volume datasets, such as multiple scattering and volumetric shadows. The approach incorporates the volumetric photon mapping technique [Jensen and Christensen 1998] into the classical precomputed radiance transfer [Sloan et al. 2002] pipeline. Figure 1 shows our method applied successfully in both interactive graphics and scientific visualization applications.
{"title":"Realtime volume rendering using precomputed photon mapping","authors":"Yubo Zhang, Z. Dong, K. Ma","doi":"10.1145/2159616.2159663","DOIUrl":"https://doi.org/10.1145/2159616.2159663","url":null,"abstract":"In this poster, we present a volume rendering framework that achieves realtime rendering of global illumination effects for volume datasets, such as multiple scattering and volume shadow. This approach incorporates the volumetric photon mapping technique [Jensen and Christensen 1998] into the classical precomputed radiance transfer [Sloan et al. 2002] pipeline. Fig.1 shows that our method is successfully applied in both interactive graphics and scientific visualization applications.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"46 1","pages":"217"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74018410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. McGuire, P. Hennessy, Michał Bukowski, Brian Osman
This paper describes a novel filter for simulating motion blur phenomena in real time by applying ideas from offline stochastic reconstruction. The filter operates as a 2D post-process on a conventional framebuffer augmented with a screen-space velocity buffer. We demonstrate results on video game scenes rendered and reconstructed in real-time on NVIDIA GeForce 480 and Xbox 360 platforms, and show that the same filter can be applied to cinematic post-processing of offline-rendered images and real photographs. The technique is fast and robust enough that we deployed it in a production game engine used at Vicarious Visions.
{"title":"A reconstruction filter for plausible motion blur","authors":"M. McGuire, P. Hennessy, Michał Bukowski, Brian Osman","doi":"10.1145/2159616.2159639","DOIUrl":"https://doi.org/10.1145/2159616.2159639","url":null,"abstract":"This paper describes a novel filter for simulating motion blur phenomena in real time by applying ideas from offline stochastic reconstruction. The filter operates as a 2D post-process on a conventional framebuffer augmented with a screen-space velocity buffer. We demonstrate results on video game scenes rendered and reconstructed in real-time on NVIDIA GeForce 480 and Xbox 360 platforms, and show that the same filter can be applied to cinematic post-processing of offline-rendered images and real photographs. The technique is fast and robust enough that we deployed it in a production game engine used at Vicarious Visions.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"2011 1","pages":"135-142"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86351135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Gribble, J. Fisher, Daniel Eby, E. Quigley, Gideon Ludwig
The Ray Tracing Visualization Toolkit (rtVTK) is a collection of programming and visualization tools supporting visual analysis of ray-based rendering algorithms. rtVTK leverages layered visualization within the spatial domain of computation, enabling investigators to explore the computational elements of any ray-based renderer. Renderers utilize a library for recording and processing ray state, and a configurable pipeline of loosely coupled components allows run-time control of the resulting visualization. rtVTK enhances tasks in development, education, and analysis by enabling users to interact with a visual representation of ray tracing computations.
{"title":"Ray tracing visualization toolkit","authors":"C. Gribble, J. Fisher, Daniel Eby, E. Quigley, Gideon Ludwig","doi":"10.1145/2159616.2159628","DOIUrl":"https://doi.org/10.1145/2159616.2159628","url":null,"abstract":"The Ray Tracing Visualization Toolkit (rtVTK) is a collection of programming and visualization tools supporting visual analysis of ray-based rendering algorithms. rtVTK leverages layered visualization within the spatial domain of computation, enabling investigators to explore the computational elements of any ray-based renderer. Renderers utilize a library for recording and processing ray state, and a configurable pipeline of loosely coupled components allows run-time control of the resulting visualization. rtVTK enhances tasks in development, education, and analysis by enabling users to interact with a visual representation of ray tracing computations.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"1 1","pages":"71-78"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91345193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xuan Yu, Jason C. Yang, J. Hensley, T. Harada, Jingyi Yu
The appearance of hair plays a critical role in synthesizing realistic-looking human characters. However, due to the high complexity of hair geometry and the scattering nature of hair fibers, rendering hair with photorealistic quality at interactive speeds remains an open problem in computer graphics. Previous approaches simplify the scattering model to tackle only a specific aspect of the scattering effects. In this paper, we present a new approach to simultaneously render complex scattering effects, including volumetric shadows, transparency, and anti-aliasing, under a unified framework. Our solution uses a shadow-ray path to produce volumetric self-shadows and an additional view-ray path to produce transparency. To compute and accumulate the contribution of individual hair fibers along each (shadow or view) path, we develop a new GPU-based k-buffer technique that can efficiently locate the k nearest scattering locations and combine them in the correct order. Compared with existing multi-layer-based approaches [Kim and Neumann 2001; Yuksel and Keyser 2008; Sintorn and Assarsson 2009], we show that our k-buffer solution more accurately reproduces the shadowing and transparency effects. Further, we present an anti-aliasing scheme that builds directly upon the k-buffer. We implement all three effects (volumetric shadows, transparency, and anti-aliasing) in a unified rendering pipeline. Experiments on complex hair models demonstrate that our new solution produces near-photorealistic hair rendering at interactive speeds.
{"title":"A framework for rendering complex scattering effects on hair","authors":"Xuan Yu, Jason C. Yang, J. Hensley, T. Harada, Jingyi Yu","doi":"10.1145/2159616.2159635","DOIUrl":"https://doi.org/10.1145/2159616.2159635","url":null,"abstract":"The appearance of hair plays a critical role in synthesizing realistic looking human characters. However, due to the high complexity in hair geometry and the scattering nature of hair fibers, rendering hair with photorealistic quality and at interactive speeds remains as an open problem in computer graphics. Previous approaches attempt to simplify the scattering model to only tackle a specific aspect of the scattering effects. In this paper, we present a new approach to simultaneously render complex scattering effects including volumetric shadows, transparency, and antialiasing under a unified framework. Our solution uses a shadow-ray path to produce volumetric self-shadows and an additional view-ray path to produce transparency. To compute and accumulate the contribution of individual hair fibers along each (shadow or view) path, we develop a new GPU-based k-buffer technique that can efficiently locate the K nearest scattering locations and combine them in the correct order. Compared with existing multi-layer based approaches[Kim and Neumann 2001; Yuksel and Keyser 2008; Sintorn and Assarsson 2009], we show that our k-buffer solution can more accurately reproduce the shadowing and transparency effects. Further, we present an anti-aliasing scheme that directly builds upon the k-buffer. We implement all three effects (volumetric shadows, transparency, and anti-aliasing) under a unified rendering pipeline. Experiments on complex hair models demonstrate that our new solution produces near photorealistic hair rendering at very interactive speed.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"114 1","pages":"111-118"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89923743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Casas, M. Tejera, Jean-Yves Guillemaut, A. Hilton
A 4D parametric motion graph representation is presented for interactive animation from actor performance capture in a multiple camera studio. The representation is based on a 4D model database of temporally aligned mesh sequence reconstructions for multiple motions. High-level movement controls such as speed and direction are achieved by blending multiple mesh sequences of related motions. A real-time mesh sequence blending approach is introduced which combines the realistic deformation of previous non-linear solutions with efficient online computation. Transitions between different parametric motion spaces are evaluated in real-time based on surface shape and motion similarity. 4D parametric motion graphs allow real-time interactive character animation while preserving the natural dynamics of the captured performance.
{"title":"4D parametric motion graphs for interactive animation","authors":"D. Casas, M. Tejera, Jean-Yves Guillemaut, A. Hilton","doi":"10.1145/2159616.2159633","DOIUrl":"https://doi.org/10.1145/2159616.2159633","url":null,"abstract":"A 4D parametric motion graph representation is presented for interactive animation from actor performance capture in a multiple camera studio. The representation is based on a 4D model database of temporally aligned mesh sequence reconstructions for multiple motions. High-level movement controls such as speed and direction are achieved by blending multiple mesh sequences of related motions. A real-time mesh sequence blending approach is introduced which combines the realistic deformation of previous non-linear solutions with efficient online computation. Transitions between different parametric motion spaces are evaluated in real-time based on surface shape and motion similarity. 4D parametric motion graphs allow real-time interactive character animation while preserving the natural dynamics of the captured performance.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"28 1","pages":"103-110"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73394907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Life on Earth has many forms and every life form has its own version of reality, as reflected in the eyes of the viewer. These worlds are as real as the one that we know and all of them are equally fascinating. The multiverse of such "animal realities" can be explored in Virtual Reality, as described in this concept work.
{"title":"Animal reality","authors":"A. Sherstyuk","doi":"10.1145/2159616.2159651","DOIUrl":"https://doi.org/10.1145/2159616.2159651","url":null,"abstract":"Life on Earth has many forms and every life form has its own version of reality, as reflected in the eyes of the viewer. These worlds are as real as the one that we know and all of them are equally fascinating. The multiverse of such \"animal realities\" can be explored in Virtual Reality, as described in this concept work.","PeriodicalId":91160,"journal":{"name":"Proceedings. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games","volume":"25 1","pages":"205"},"PeriodicalIF":0.0,"publicationDate":"2012-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86563330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}