Appearance modeling is a difficult problem that still receives considerable attention from the graphics and vision communities. Though recent years have brought a growing number of high-quality material databases that have sparked new research, there is a general lack of evaluation benchmarks for performance assessment and fair comparison between competing works. We therefore release a new dataset and pose a public challenge that will enable standardized evaluations. To this end, we measured 56 fabric samples with a commercial appearance scanner. We publish the resulting calibrated HDR images, along with baseline SVBRDF fits. The challenge is to recreate, under known light and view sampling, the appearance of a subset of unseen images. User submissions will be automatically evaluated and ranked by a set of standard image metrics.

CCS Concepts: • Computing methodologies → Reflectance modeling; Appearance and texture representations
{"title":"Fabric Appearance Benchmark","authors":"S. Merzbach, R. Klein","doi":"10.2312/egp.20201035","DOIUrl":"https://doi.org/10.2312/egp.20201035","url":null,"abstract":"Appearance modeling is a difficult problem that still receives considerable attention from the graphics and vision communities. Though recent years have brought a growing number of high-quality material databases that have sparked new research, there is a general lack of evaluation benchmarks for performance assessment and fair comparisons between competing works. We therefore release a new dataset and pose a public challenge that will enable standardized evaluations. For this we measured 56 fabric samples with a commercial appearance scanner. We publish the resulting calibrated HDR images, along with baseline SVBRDF fits. The challenge is to recreate, under known light and view sampling, the appearance of a subset of unseen images. User submissions will be automatically evaluated and ranked by a set of standard image metrics. CCS Concepts • Computing methodologies → Reflectance modeling; Appearance and texture representations;","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"118 1","pages":"3-4"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89398061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. Zell, Katja Zibrek, Xueni Pan, M. Gillies, R. McDonnell
This course will introduce students, researchers, and digital artists to recent results in perceptual research on virtual characters. It covers how the technical and artistic aspects that constitute the appearance of a virtual character influence human perception, and how to create a plausibility illusion in interactive scenarios with virtual characters. We will report results of studies that addressed the influence of low-level cues, such as facial proportions, shading, or level of detail, and higher-level cues, such as behavior or artistic stylization. We will place emphasis on aspects encountered during character development, animation, and interaction design, and on achieving consistency between the visuals and the storytelling. We will close with the relationship between verbal and non-verbal interaction and introduce some concepts that are important for creating convincing character behavior in virtual reality. The insights presented in this course will serve as an additional toolset to anticipate the effect of certain design decisions and to create more convincing characters, especially where budgets or time are limited.

1. Course Description

Virtual humans are finding a growing number of applications, from social media apps such as Spaces by Facebook, Bitmoji, and Genies to computer games and human-computer interfaces. Their use has also extended from typical on-screen display applications to immersive and collaborative environments (VR/AR/MR). At the same time, we are witnessing significant improvements in real-time performance, increased visual fidelity of characters, and novel devices. The question of how these developments will be received from the user's point of view, and which aspects of virtual characters influence the user most, has therefore never been so important.

This course will provide an overview of existing perceptual studies related to virtual characters. To make the course easier to follow, we start with a brief overview of human perception and of how perceptual studies are conducted in terms of methods and experiment design. With knowledge of the methods, we continue with the artistic and technical aspects that influence the design of character appearance (lighting and shading, facial feature placement, stylization, etc.). Important questions on character design will be addressed, such as: if I want my character to be highly appealing, should I render it with realistic or stylized shading? What facial features make my character appear more trustworthy? Do dark shadows enhance the emotion my character is portraying? We then dive deeper into the movement of the characters, exploring which information is present in motion cues and how motion can, in combination with character appearance, guide our perception and even be a foundation of biased perception (stereotypes). Some examples of questions that we will address are: if I want my character to appear extroverted, what movement or app…
{"title":"From Perception to Interaction with Virtual Characters","authors":"E. Zell, Katja Zibrek, Xueni Pan, M. Gillies, R. Mcdonnell","doi":"10.2312/egt.20201001","DOIUrl":"https://doi.org/10.2312/egt.20201001","url":null,"abstract":"This course will introduce students, researchers and digital artists to the recent results in perceptual research on virtual characters. It covers how technical and artistic aspects that constitute the appearance of a virtual character influence human perception, and how to create a plausibility illusion in interactive scenarios with virtual characters. We will report results of studies that addressed the influence of low-level cues like facial proportions, shading or level of detail and higher-level cues such as behavior or artistic stylization. We will place emphasis on aspects that are encountered during character development, animation, interaction design and achieving consistency between the visuals and storytelling. We will close with the relationship between verbal and non-verbal interaction and introduce some concepts which are important for creating convincing character behavior in virtual reality. The insights that we present in this course will serve as an additional toolset to anticipate the effect of certain design decisions and to create more convincing characters, especially in the case where budgets or time are limited. 1. Course Description Virtual humans are finding a growing number of applications, such as in social media apps, Spaces by Facebook, Bitmoji and Genies, as well as computer games and human-computer interfaces. Their use today has also extended from the typical on-screen display applications to immersive and collaborative environments (VR/AR/MR). At the same time, we are also witnessing significant improvements in real-time performance, increased visual fidelity of characters and novel devices. The question of how these developments will be received from the user’s point of view, or which aspects of virtual characters influence the user more, has therefore never been so important. This course will provide an overview of existing perceptual studies related to the topic of virtual characters. To make the course easier to follow, we start with a brief overview of human perception and how perceptual studies are conducted in terms of methods and experiment design. With knowledge of the methods, we continue with artistic and technical aspects which influence the design of character appearance (lighting and shading, facial feature placement, stylization, etc.). Important questions on character design will be addressed such as – if I want my character to be highly appealing, should I render with realistic or stylized shading? What facial features make my character appear more trustworthy? Do dark shadows enhance the emotion my character is portraying? We then dive deeper into the movement of the characters, exploring which information is present in the motion cues and how motion can, in combination with character appearance, guide our perception and even be a foundation of biased perception (stereotypes). Some examples of questions that we will address are – if I want my character to appear extroverted, what movement or app","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. 
Eurographics Workshop on 3D Object Retrieval","volume":"26 1","pages":"5-31"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86867025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a novel noise model to procedurally generate volumetric terrain on implicit surfaces. The main idea is to combine a novel Locally Controlled 3D Spot Noise (LCSN) for authoring the macro structures with 3D Gabor noise for adding micro details. More specifically, a spatially defined kernel formulation in combination with an impulse distribution enables the LCSN to generate craters and boulders of arbitrary size, while the Gabor noise generates stochastic Gaussian details. The corresponding metaball positions in the underlying implicit surface preserve locality, avoiding the global support of traditional procedural noise textures; this yields an essential feature that is often missing in terrain generators based on procedural textures. Furthermore, different noise-based primitives are integrated through operators, i.e., blending, replacing, or warping, into the complex volumetric terrain. The result is a completely implicit representation and, as such, has the advantages of compactness and flexible user control. We applied our method to generating high-quality asteroid meshes with fine surface details.

CCS Concepts: • Computing methodologies → Volumetric models
{"title":"Procedural 3D Asteroid Surface Detail Synthesis","authors":"Xizhi Li, René Weller, G. Zachmann","doi":"10.2312/egs.20201020","DOIUrl":"https://doi.org/10.2312/egs.20201020","url":null,"abstract":"We present a novel noise model to procedurally generate volumetric terrain on implicit surfaces. The main idea is to combine a novel Locally Controlled 3D Spot noise (LCSN) for authoring the macro structures and 3D Gabor noise to add micro details. More specifically, a spatially-defined kernel formulation in combination with an impulse distribution enables the LCSN to generate arbitrary size craters and boulders, while the Gabor noise generates stochastic Gaussian details. The corresponding metaball positions in the underlying implicit surface preserve locality to avoid the globality of traditional procedural noise textures, which yields an essential feature that is often missing in procedural texture based terrain generators. Furthermore, different noise-based primitives are integrated through operators, i.e. blending, replacing, or warping into the complex volumetric terrain. The result is a completely implicit representation and, as such, has the advantage of compactness as well as flexible user control. We applied our method to generating high quality asteroid meshes with fine surface details. CCS Concepts • Computing methodologies → Volumetric models;","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"1 1","pages":"69-72"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79184996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The modeling and rendering of hair in computer graphics have seen much progress in the last few years. However, modeling and rendering hair aging, visually seen as the loss of pigments, have not attracted the same attention. We introduce in this paper a biologically inspired hair aging system with two main parts: greying of individual hairs, and time evolution of greying over the scalp. The greying of individual hairs is based on current knowledge about melanin loss, whereas the evolution over the scalp is modeled by segmenting the scalp into regions and defining distinct time frames in which greying occurs. Our experiments yield visually plausible results despite the relatively simple model. We validate the results by presenting them side by side with real photographs of men at different ages.
{"title":"A Practical Male Hair Aging Model","authors":"D. Volkmann, M. Walter","doi":"10.2312/egs.20201017","DOIUrl":"https://doi.org/10.2312/egs.20201017","url":null,"abstract":"The modeling and rendering of hair in Computer Graphics have seen much progress in the last few years. However, modeling and rendering hair aging, visually seen as the loss of pigments, have not attracted the same attention. We introduce in this paper a biologically inspired hair aging system with two main parts: greying of individual hairs, and time evolution of greying over the scalp. The greying of individual hairs is based on current knowledge about melanin loss, whereas the evolution on the scalp is modeled by segmenting the scalp in regions and defining distinct time frames for greying to occur. Our experimental visual results present plausible results despite the relatively simple model. We validate the results by presenting side by side our results with real pictures of men at different ages.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"26 1","pages":"57-60"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91034635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper investigates a first-order generalization of signed distance fields. We show that we can improve accuracy and storage efficiency by incorporating the spatial derivatives of the signed distance function into the distance field samples. We show that a representation in the power basis remains invariant under barycentric combination and, as such, is interpolated exactly by the GPU. Our construction is applicable in any geometric setting where point-surface distances can be queried. To emphasize the practical advantages of this approach, we apply our results to signed distance field generation from triangular meshes. We propose storage optimization approaches and offer a theoretical and empirical accuracy analysis of the proposed distance field type in relation to traditional, zero-order distance fields. We show that the proposed representation may offer an order of magnitude improvement in storage while retaining the same precision as a higher-resolution distance field.

CCS Concepts: • Computing methodologies → Ray tracing; Volumetric models
{"title":"First Order Signed Distance Fields","authors":"Róbert Bán, Gábor Valasek","doi":"10.2312/egs.20201011","DOIUrl":"https://doi.org/10.2312/egs.20201011","url":null,"abstract":"This paper investigates a first order generalization of signed distance fields. We show that we can improve accuracy and storage efficiency by incorporating the spatial derivatives of the signed distance function into the distance field samples. We show that a representation in power basis remains invariant under barycentric combination, as such, it is interpolated exactly by the GPU. Our construction is applicable in any geometric setting where point-surface distances can be queried. To emphasize the practical advantages of this approach, we apply our results to signed distance field generation from triangular meshes. We propose storage optimization approaches and offer a theoretical and empirical accuracy analysis of our proposed distance field type in relation to traditional, zero order distance fields. We show that the proposed representation may offer an order of magnitude improvement in storage while retaining the same precision as a higher resolution distance field. CCS Concepts • Computing methodologies → Ray tracing; Volumetric models;","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"12 1","pages":"33-36"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78223819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many studies have recently applied deep learning to the automatic colorization of line drawings. However, it is difficult to paint empty pupils using existing methods, because the networks are trained with pupils that have edges, which are generated from color images using image processing. Most actual line drawings have empty pupils that artists must paint in. In this paper, we propose a novel network model that transfers the pupil details in a reference color image to input line drawings with empty pupils. We also propose a method for accurately and automatically coloring eyes: eye patches are extracted from a reference color image and automatically added to an input line drawing as color hints using our eye position estimation network.

CCS Concepts: • Computing methodologies → Image processing; • Applied computing → Fine arts
{"title":"Deep-Eyes: Fully Automatic Anime Character Colorization with Painting of Details on Empty Pupils","authors":"Kenta Akita, Yuki Morimoto, R. Tsuruno","doi":"10.2312/egs.20201023","DOIUrl":"https://doi.org/10.2312/egs.20201023","url":null,"abstract":"Many studies have recently applied deep learning to the automatic colorization of line drawings. However, it is difficult to paint empty pupils using existing methods because the networks are trained with pupils that have edges, which are generated from color images using image processing. Most actual line drawings have empty pupils that artists must paint in. In this paper, we propose a novel network model that transfers the pupil details in a reference color image to input line drawings with empty pupils. We also propose a method for accurately and automatically coloring eyes. In this method, eye patches are extracted from a reference color image and automatically added to an input line drawing as color hints using our eye position estimation network. CCS Concepts • Computing methodologies → Image processing; • Applied computing → Fine arts;","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"107 2 1","pages":"81-84"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85375987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Dyke, F. Zhou, Yu-Kun Lai, Paul L. Rosin, D. Guo, Kun Li, R. Marin, Jingyu Yang
{"title":"SHREC 2020 Track: Non-rigid Shape Correspondence of Physically-Based Deformations","authors":"R. Dyke, F. Zhou, Yu-Kun Lai, Paul L. Rosin, D. Guo, Kun Li, R. Marin, Jingyu Yang","doi":"10.2312/3dor.20201161","DOIUrl":"https://doi.org/10.2312/3dor.20201161","url":null,"abstract":"","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"8 1","pages":"19-26"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72822266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a course design on non-photorealistic rendering (NPR). As a sub-field of computer graphics, NPR aims to model artistic media, styles, and techniques that capture salient characteristics in images to convey particular information or mood. The results can be just as inspiring as the photorealistic scenes produced with the latest ray-tracing techniques, even though the goals are fundamentally different. The paper offers ideas for developing a full course on NPR by presenting a series of assignments that cover a wide range of NPR techniques, and it shares experience from teaching such a course at the junior/senior undergraduate level.

CCS Concepts: • Computing methodologies → Non-photorealistic rendering
{"title":"Designing a Course on Non-Photorealistic Rendering","authors":"Ivaylo Ilinkin","doi":"10.2312/eged.20201028","DOIUrl":"https://doi.org/10.2312/eged.20201028","url":null,"abstract":"This paper presents a course design on Non-Photorealistic Rendering (NPAR). As a sub-field of computer graphics, NPAR aims to model artistic media, styles, and techniques that capture salient characteristics in images to convey particular information or mood. The results can be just as inspiring as the photorealistic scenes produced with the latest ray-tracing techniques even though the goals are fundamentally different. The paper offers ideas for developing a full course on NPAR by presenting a series of assignments that cover a wide range of NPAR techniques and shares experience on teaching such a course at the junior/senior undergraduate level. CCS Concepts • Computing methodologies → Non-photorealistic rendering;","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"43 1","pages":"9-16"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73801723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simon Biland, V. C. Azevedo, Byungsoo Kim, B. Solenthaler
Convolutional neural networks were recently employed to fully reconstruct fluid simulation data from a set of reduced parameters. However, since (de-)convolutions traditionally trained with supervised L1 loss functions do not discriminate between low and high frequencies in the data, the error is not minimized efficiently for the higher bands. This directly affects the perceived quality of the results, since missing high-frequency details are easily noticeable. In this paper, we analyze the reconstruction quality of generative networks and present a frequency-aware loss function that is able to focus on specific bands of the dataset during training. We show that our approach improves the reconstruction quality of fluid simulation data in mid-frequency bands, yielding perceptually better results while requiring comparable training time.
{"title":"Frequency-Aware Reconstruction of Fluid Simulations with Generative Networks","authors":"Simon Biland, V. C. Azevedo, Byungsoo Kim, B. Solenthaler","doi":"10.2312/egs.20201019","DOIUrl":"https://doi.org/10.2312/egs.20201019","url":null,"abstract":"Convolutional neural networks were recently employed to fully reconstruct fluid simulation data from a set of reduced parameters. However, since (de-)convolutions traditionally trained with supervised L1-loss functions do not discriminate between low and high frequencies in the data, the error is not minimized efficiently for higher bands. This directly correlates with the quality of the perceived results, since missing high frequency details are easily noticeable. In this paper, we analyze the reconstruction quality of generative networks and present a frequency-aware loss function that is able to focus on specific bands of the dataset during training time. We show that our approach improves reconstruction quality of fluid simulation data in mid-frequency bands, yielding perceptually better results while requiring comparable training time.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"73 1","pages":"65-68"},"PeriodicalIF":0.0,"publicationDate":"2019-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80446032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generating a virtual try-on image from in-shop clothing images and a model person's snapshot is a challenging task, because the human body and clothes have highly flexible shapes. In this paper, we develop a Virtual Try-on Generative Adversarial Network (VITON-GAN) that generates virtual try-on images from images of in-shop clothing and a model person. The method enhances the quality of the generated image when occlusion is present in the model person's image (e.g., arms crossed in front of the clothes) by adding an adversarial mechanism to the training pipeline.
{"title":"VITON-GAN: Virtual Try-on Image Generator Trained with Adversarial Loss","authors":"Shion Honda","doi":"10.2312/egp.20191043","DOIUrl":"https://doi.org/10.2312/egp.20191043","url":null,"abstract":"Generating a virtual try-on image from in-shop clothing images and a model person's snapshot is a challenging task because the human body and clothes have high flexibility in their shapes. In this paper, we develop a Virtual Try-on Generative Adversarial Network (VITON-GAN), that generates virtual try-on images using images of in-shop clothing and a model person. This method enhances the quality of the generated image when occlusion is present in a model person's image (e.g., arms crossed in front of the clothes) by adding an adversarial mechanism in the training pipeline.","PeriodicalId":72958,"journal":{"name":"Eurographics ... Workshop on 3D Object Retrieval : EG 3DOR. Eurographics Workshop on 3D Object Retrieval","volume":"116 1","pages":"9-10"},"PeriodicalIF":0.0,"publicationDate":"2019-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74538896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}