Modeling and representing materials in the wild
K. Bala. doi:10.1145/2643188.2700379

Our everyday life brings us into contact with a rich range of materials that contribute to both the utility and aesthetics of our environment. Human beings are very good at using subtle distinctions in appearance to distinguish between materials (e.g., silk vs. cotton, laminate vs. granite). Capturing these visually important, yet subtle, distinctions is critical for applications in many domains: in virtual and augmented reality fueled by the advent of devices like Google Glass, in virtual prototyping for industrial design, in ecommerce and retail, in textile design and prototyping, in interior design and remodeling, and in games and movies. Understanding how humans perceive materials can drive better graphics and vision algorithms for material recognition, understanding, and reproduction. As a first step towards this goal, it is useful to collect information about the vast range of materials that we encounter in our daily lives. We introduce two new crowdsourced databases of material annotations to drive better material-driven exploration. OpenSurfaces is a rich, labeled database consisting of thousands of examples of surfaces segmented from consumer photographs of interiors, annotated with material parameters, texture information, and contextual information. IIW (Intrinsic Images in the Wild) is a database of pairwise material annotations of points in images that is useful for decomposing images in the wild into material and lighting layers. Together these databases can drive various material-based applications such as surface retexturing, intrinsic image decomposition, intelligent material-based image browsing, and material design.
{"title":"Modeling and representing materials in the wild","authors":"K. Bala","doi":"10.1145/2643188.2700379","DOIUrl":"https://doi.org/10.1145/2643188.2700379","url":null,"abstract":"Our everyday life brings us in contact with a rich range of materials that contribute to both the utility and aesthetics of our environment. Human beings are very good at using subtle distinctions in appearance to distinguish between materials (e.g., silk vs. cotton, laminate vs. granite). Capturing these visually important, yet subtle, distinctions is critical for applications in many domains: in virtual and augmented reality fueled by the advent of devices like Google Glass, in virtual prototyping for industrial design, in ecommerce and retail, in textile design and prototyping, in interior design and remodeling, and in games and movies. Understanding how humans perceive materials can drive better graphics and vision algorithms for material recognition and understanding, and material reproduction. As a first step towards achieving this goal, it is useful to collect information about the vast range of materials that we encounter in our daily lives. We introduce two new crowdsourced databases of material annotations to drive better material-driven exploration. OpenSurfaces is a rich, labeled database consisting of thousands of examples of surfaces segmented from consumer photographs of interiors, and annotated with material parameters, texture information, and contextual information. IIW (Intrinsic Images in theWild) is a database of pairwise material annotations of points in images that is useful for decomposing images in the wild into material and lighting layers. Together these databases can drive various material-based applications like surface retexturing, intrinsic image decomposition, intelligent material-based image browsing, and material design.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"225 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115602726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Computational photography: coding in space and time
B. Masiá. doi:10.1145/2643188.2700583

Computational photography emerged as a multidisciplinary field at the intersection of optics, computer vision, and computer graphics, with the objective of acquiring richer representations of a scene than conventional cameras can capture. The basic idea is to code the information before it reaches the sensor, so that a subsequent decoding yields the final image (or video, light field, focal stack, etc.). We describe here two examples of computational photography. One deals with coded apertures for the problem of defocus deblurring and is a classical example of this coding-decoding scheme. The other is an ultrafast imaging system, the first able to capture light propagation in macroscopic, high-resolution scenes at 0.5 trillion frames per second.
{"title":"Computational photography: coding in space and time","authors":"B. Masiá","doi":"10.1145/2643188.2700583","DOIUrl":"https://doi.org/10.1145/2643188.2700583","url":null,"abstract":"Computational photography emerged as a multidisciplinary field at the intersection of optics, computer vision, and computer graphics, with the objective of acquiring richer representations of a scene than those that conventional cameras can capture. The basic idea is to somehow code the information before it reaches the sensor, so that a posterior decoding will yield the final image (or video, light field, focal stack, etc). We describe here two examples of computational photography. One deals with coded apertures for the problem of defocus deblurring, and is a classical example of this coding-decoding scheme. The other is an ultrafast imaging system, the first to be able to capture light propagation in macroscopic high resolution scenes at 0.5 trillion frames per second.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"390 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127590942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Bounding volume hierarchies versus kd-trees on contemporary many-core architectures
Marek Vinkler, V. Havran, Jiří Bittner. doi:10.1145/2643188.2643196

We present a performance comparison of bounding volume hierarchies and kd-trees for ray tracing on many-core architectures (GPUs). The comparison focuses on rendering times and traversal characteristics on the GPU, using data structures optimized for maximum ray tracing performance irrespective of build time. We show that for a contemporary GPU architecture (NVIDIA Kepler), bounding volume hierarchies have higher ray tracing performance than kd-trees for simple and moderately complex scenes. Kd-trees, on the other hand, have higher performance for complex scenes, in particular those with occlusion.
{"title":"Bounding volume hierarchies versus kd-trees on contemporary many-core architectures","authors":"Marek Vinkler, V. Havran, Jiří Bittner","doi":"10.1145/2643188.2643196","DOIUrl":"https://doi.org/10.1145/2643188.2643196","url":null,"abstract":"We present a performance comparison of bounding volume hierarchies and kd-trees for ray tracing on many-core architectures (GPUs). The comparison is focused on rendering times and traversal characteristics on the GPU using data structures that were optimized for maximum performance of tracing rays irrespective of the time needed for their build. We show that for a contemporary GPU architecture (NVIDIA Kepler) bounding volume hierarchies have higher ray tracing performance than kd-trees for simple and moderately complex scenes. Kd-trees, on the other hand, have higher performance for complex scenes, in particular for those with occlusion.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122397193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Position based skinning of skeleton-driven deformable characters
Nadine Abu Rumman, M. Fratarcangeli. doi:10.1145/2643188.2643194

This paper presents a real-time skinning technique for character animation based on a two-layered deformation model. For each frame, the skin of a generic character is first deformed with a classic linear blend skinning approach; the vertex positions are then adjusted according to a Position Based Dynamics scheme. We define geometric constraints that mimic the behavior of flesh and produce interesting effects such as volume conservation and secondary animations, in particular passive jiggling, without relying on a predefined training set of poses. Once the whole model is defined, the character animation is synthesized in real time without suffering from the inherent artifacts of classic interactive skinning techniques, such as the "candy-wrapper" effect or undesired skin bulging.
{"title":"Position based skinning of skeleton-driven deformable characters","authors":"Nadine Abu Rumman, M. Fratarcangeli","doi":"10.1145/2643188.2643194","DOIUrl":"https://doi.org/10.1145/2643188.2643194","url":null,"abstract":"This paper presents a real-time skinning technique for character animation based on a two-layered deformation model. For each frame, the skin of a generic character is first deformed by using a classic linear blend skinning approach, then the vertex positions are adjusted according to a Position Based Dynamics schema. We define geometric constraints which mimic the flesh behavior and produce interesting effects like volume conservation and secondary animations, in particular passive jiggling behavior, without relying on a predefined training set of poses. Once the whole model is defined, the character animation is synthesized in real-time without suffering of the inherent artefacts of classic interactive skinning techniques, such as the \"candy-wrapper\" effect or undesired skin bulging.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125035420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Hybrid color model for image retrieval based on fuzzy histograms
Vedran Ljubovic, H. Supic. doi:10.1145/2643188.2643198

A hybrid color model is a color descriptor formed by combining channels from several other color models. In computer graphics applications such models are rarely used due to redundancy; however, they may be of interest for Content-Based Image Retrieval (CBIR), where the best features of each color model can be combined to obtain optimal retrieval performance. In this paper, a novel algorithm is proposed for selecting the channels of a hybrid color model used in the construction of a fuzzy color histogram. The algorithm is elaborated and implemented for several common reference datasets consisting of photographs of natural scenes. The result of this experimental procedure is a new hybrid color model named HSY. Using standard datasets and a standard metric for retrieval performance (ANMRR), we show that the new model gives improved retrieval performance. In addition, the model is attractive for use in the JPEG compressed domain due to its simpler calculation.
{"title":"Hybrid color model for image retrieval based on fuzzy histograms","authors":"Vedran Ljubovic, H. Supic","doi":"10.1145/2643188.2643198","DOIUrl":"https://doi.org/10.1145/2643188.2643198","url":null,"abstract":"A hybrid color model is a color descriptor formed by combining different channels from several other color models. In computer graphics applications such models are rarely used due to redundancy. However, hybrid color models may be of interest for Content-Based Image Retrieval (CBIR). Best features of each color model can be combined to obtain optimum retrieval performance. In this paper, a novel algorithm is proposed for selection of channels for a hybrid color model used in construction of a fuzzy color histogram. This algorithm is elaborated and implemented for use with several common reference datasets consisting of photographs of natural scenes. Result of this experimental procedure is a new hybrid color model named HSY. Using standard datasets and a standard metric for retrieval performance (ANMRR), it is shown that this new model can give an improved retrieval performance. In addition, this model is of interest for use in JPEG compressed domain due to simpler calculation.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131594481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Sampling Gabor noise in the spatial domain
Victor Charpenay, Bernhard Steiner, Przemyslaw Musialski. doi:10.1145/2643188.2643193

Gabor noise is a powerful technique for procedural texture generation. Contrary to other types of procedural noise, its sparse-convolution nature makes it easy to control locally. In this paper, we demonstrate this property by explicitly introducing spatial variations. We do so by linking the sparse convolution process to the parameterization of the underlying surface. Using this approach, control maps for the parameters can be provided in a natural and convenient way. To derive intuitive control of the resulting textures, we conduct a small study of how the parameters of the Gabor kernel influence the outcome, and we introduce a solution that binds values such as the frequency or the orientation of the Gabor kernel to a user-provided control map in order to produce novel visual effects.
{"title":"Sampling Gabor noise in the spatial domain","authors":"Victor Charpenay, Bernhard Steiner, Przemyslaw Musialski","doi":"10.1145/2643188.2643193","DOIUrl":"https://doi.org/10.1145/2643188.2643193","url":null,"abstract":"Gabor noise is a powerful technique for procedural texture generation. Contrary to other types of procedural noise, its sparse convolution aspect makes it easily controllable locally. In this paper, we demonstrate this property by explicitly introducing spatial variations. We do so by linking the sparse convolution process to the parameterization of the underlying surface. Using this approach, it is possible to provide control maps for the parameters in a natural and convenient way. In order to derive intuitive control of the resulting textures, we accomplish a small study of the influence of the parameters of the Gabor kernel with respect to the outcome and we introduce a solution where we bind values such as the frequency or the orientation of the Gabor kernel to a user-provided control map in order to produce novel visual effects.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129454601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Proceedings of the 30th Spring Conference on Computer Graphics
D. Gutierrez. doi:10.1145/2643188

Welcome to the 30th Spring Conference on Computer Graphics! This conference ("probably the oldest regular annual meeting of computer graphics in Central Europe") follows a long tradition of papers in all areas related to computer graphics, with topics ranging from rendering to computational geometry and animation. I am excited to be the chair this year, and I'm looking forward to the presentations! As in previous years, the Central European Seminar on Computer Graphics (CESCG) is co-located with SCCG and serves the important function of encouraging young people in the field.
{"title":"Proceedings of the 30th Spring Conference on Computer Graphics","authors":"D. Gutierrez","doi":"10.1145/2643188","DOIUrl":"https://doi.org/10.1145/2643188","url":null,"abstract":"Welcome to the 30th Spring Conference on Computer Graphics! This conference (\"probably the oldest regular annual meeting of computer graphics in Central Europe\") follows a long tradition of papers in all areas related to computer graphics, with topics ranging from rendering, to computational geometry or animation. I am excited to be the chair this year, and I'm looking forward to the presentations! As in previous years, the Central European Seminar on Computer Graphics (CESCG) is co-located with SCCG and serves the important function of encouraging young people in the field.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"333 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115453621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A survey of direction-preserving layout strategies
M. Steiger, J. Bernard, T. May, J. Kohlhammer. doi:10.1145/2643188.2643189

In this paper we analyze different layout algorithms that preserve relative directions in geo-referenced networks. This is an important criterion for many sensor networks, such as the electric grid and other supply networks, because it enables the user to match the geographic setting with the drawing on the screen. Even today, the layouts of these networks are often created manually, because they must respect geographic references while still being easy to read and understand. The range of available automatic algorithms spans from general graph layouts through schematic maps to semi-realistic drawings. At first sight, schematics seem to be a promising compromise between geographic correctness and readability: the former property exploits the mental map of the user, while the latter makes it easier for the user to learn the network structure. We investigate different algorithms for such maps together with different visualization techniques. In particular, octilinear layouts, in which only horizontal, vertical, and diagonal directions are allowed, are prominent in handcrafted subway maps and have been used extensively to generate such drawings; hence they are also known as metro map layouts. This increases flexibility and makes the resulting layout look similar to the well-known subway maps of large cities. The key difference to general graph layout algorithms is that geographic relations are respected in terms of relative directions. However, it is not clear whether this metaphor can be transferred from metro maps to other domains. We discuss the applicability of these different approaches to geo-based networks in general, with the electric grid as a use-case scenario.
{"title":"A survey of direction-preserving layout strategies","authors":"M. Steiger, J. Bernard, T. May, J. Kohlhammer","doi":"10.1145/2643188.2643189","DOIUrl":"https://doi.org/10.1145/2643188.2643189","url":null,"abstract":"In this paper we analyze different layout algorithms that preserve relative directions in geo-referenced networks. This is an important criterion for many sensor networks such as the electric grid and other supply networks, because it enables the user to match the geographic setting with the drawing on the screen. Even today, the layout of these networks are often created manually. This is due to the requirement that these layouts must respect geographic references but should still be easy to read and understand. The range of available automatic algorithms spans from general graph layouts over schematic maps to semi-realistic drawings. At first sight, schematics seem to be a promising compromise between geographic correctness and readability. The former property exploits the mental map of the user while the latter makes it easier for the user to learn about the network structure. We investigate different algorithms for such maps together with different visualization techniques. In particular, the group of octi-linear layouts is prominent in handcrafted subway maps. These algorithms have been used extensively to generate drawings for subway maps. Also known as Metro Map layouts, only horizontal, vertical and diagonal directions are allowed. This increases flexibility and makes the resulting layout look similar to the well-known subway maps of large cities. The key difference to general graph layout algorithms is that geographic relations are respected in terms of relative directions. However, it is not clear, whether this metaphor can be transferred from metro maps to other domains. We discuss applicability of these different approaches for geo-based networks in general with the electric grid as a use-case scenario.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133631188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Skeleton-based matching for animation transfer and joint detection
Martin Madaras, Michal Piovarči, J. Dadová, Roman Franta, Tomás Kovacovský. doi:10.1145/2643188.2643197

In this paper we present a new algorithm for establishing correspondence between objects based on matching of extracted skeletons. First, a point cloud of the input model is scanned. Second, a skeleton is extracted from the scanned point cloud. In the last step, the extracted skeletons are matched based on vertex valence and segment lengths. The matching process yields two direct applications: topological mapping and segment mapping. Topological mapping can be used to detect joint positions from multiple scans of articulated figures in different poses. Segment mapping can be used for animation transfer and for transferring arbitrary per-vertex surface properties. Our approach is unique because it is based solely on matching extracted skeletons and does not require vertex correspondence.
{"title":"Skeleton-based matching for animation transfer and joint detection","authors":"Martin Madaras, Michal Piovarči, J. Dadová, Roman Franta, Tomás Kovacovský","doi":"10.1145/2643188.2643197","DOIUrl":"https://doi.org/10.1145/2643188.2643197","url":null,"abstract":"In this paper we present a new algorithm for establishing correspondence between objects based on matching of extracted skeletons. First, a point cloud of an input model is scanned. Second, a skeleton is extracted from the scanned point cloud. In the last step, all the extracted skeletons are matched based on valence of vertices and segment lengths. The matching process yields into two direct applications - topological mapping and segment mapping. Topological mapping can be used for detection of joint positions from multiple scans of articulated figures in different poses. Segment mapping can be used for animation transfer and for transferring of arbitrary surface per-vertex properties. Our approach is unique, because it is based on matching of extracted skeletons only and does not require vertex correspondence.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131769804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Rapid modelling of interactive geological illustrations with faults and compaction
Mattia Natali, J. Parulek, Daniel Patel. doi:10.1145/2643188.2643201

In this paper, we propose new methods for building geological illustrations and animations. We focus on allowing geologists to create subsurface models by means of sketches, to quickly communicate concepts and ideas rather than detailed information. The result of our sketch-based modelling approach is a layer-cake volume representing geological phenomena, where each layer is rock material that has accumulated due to a user-defined depositional event. Internal geological structures can be inspected with the different visualization techniques that we employ. Faulting and compaction of rock layers are important processes in geology; both can be modelled and visualized with our technique. Our representation supports non-planar faults that the user may define by means of sketches. Real-time illustrative animations are achieved through our GPU-accelerated approach.
{"title":"Rapid modelling of interactive geological illustrations with faults and compaction","authors":"Mattia Natali, J. Parulek, Daniel Patel","doi":"10.1145/2643188.2643201","DOIUrl":"https://doi.org/10.1145/2643188.2643201","url":null,"abstract":"In this paper, we propose new methods for building geological illustrations and animations. We focus on allowing geologists to create their subsurface models by means of sketches, to quickly communicate concepts and ideas rather than detailed information. The result of our sketch-based modelling approach is a layer-cake volume representing geological phenomena, where each layer is rock material which has accumulated due to a user-defined depositional event. Internal geological structures can be inspected by different visualization techniques that we employ. Faulting and compaction of rock layers are important processes in geology. They can be modelled and visualized with our technique. Our representation supports non-planar faults that a user may define by means of sketches. Real-time illustrative animations are achieved by our GPU accelerated approach.","PeriodicalId":115384,"journal":{"name":"Proceedings of the 30th Spring Conference on Computer Graphics","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128446493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}