Quantifying visual abstraction quality for stipple drawings
Marc Spicker, F. Hahn, Thomas Lindemeier, D. Saupe, O. Deussen
We investigate how the perceived abstraction quality of stipple illustrations relates to the number of points used to create them. Since it is difficult to find objective functions that quantify the visual quality of such illustrations, we gather comparative data through a crowdsourced user study and employ a paired comparison model to deduce absolute quality values. Based on this study, we show that it is possible to predict the perceived quality of stippled representations from the properties of an input image. Our results relate to the Weber-Fechner law from psychophysics and indicate a logarithmic relation between the number of points and perceived abstraction quality. We give guidance on the number of stipple points that typically suffices to represent an input image well.
{"title":"Quantifying visual abstraction quality for stipple drawings","authors":"Marc Spicker, F. Hahn, Thomas Lindemeier, D. Saupe, O. Deussen","doi":"10.1145/3092919.3092923","DOIUrl":"https://doi.org/10.1145/3092919.3092923","url":null,"abstract":"We investigate how the perceived abstraction quality of stipple illustrations is related to the number of points used to create them. Since it is difficult to find objective functions that quantify the visual quality of such illustrations, we gather comparative data by a crowdsourcing user study and employ a paired comparison model to deduce absolute quality values. Based on this study we show that it is possible to predict the perceived quality of stippled representations based on the properties of an input image. Our results are related to Weber-Fechner's law from psychophysics and indicate a logarithmic relation between numbers of points and perceived abstraction quality. We give guidance for the number of stipple points that is typically enough to represent an input image well.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114742723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Depth-aware neural style transfer
Xiao-Chang Liu, Ming-Ming Cheng, Yu-Kun Lai, Paul L. Rosin
Neural style transfer has recently received significant attention and demonstrated impressive results. An efficient solution proposed by Johnson et al. trains feed-forward convolutional neural networks by defining and optimizing perceptual loss functions. Such methods are typically based on high-level features extracted from pre-trained neural networks, where the loss functions contain two components: style loss and content loss. However, these pre-trained networks were originally designed for object recognition, so the high-level features often focus on the primary subject and neglect other details. As a result, when input images contain multiple objects, potentially at different depths, the resulting images are often unsatisfactory: the image layout is destroyed, and the boundaries between foreground and background, as well as between different objects, become obscured. We observe that the depth map effectively reflects the spatial distribution of an image, and that preserving the depth map of the content image after stylization helps produce an image that preserves its semantic content. In this paper, we introduce a novel approach to neural style transfer that integrates depth preservation as an additional loss term, preserving the overall image layout while performing style transfer.
{"title":"Depth-aware neural style transfer","authors":"Xiao-Chang Liu, Ming-Ming Cheng, Yu-Kun Lai, Paul L. Rosin","doi":"10.1145/3092919.3092924","DOIUrl":"https://doi.org/10.1145/3092919.3092924","url":null,"abstract":"Neural style transfer has recently received significant attention and demonstrated amazing results. An efficient solution proposed by Johnson et al. trains feed-forward convolutional neural networks by defining and optimizing perceptual loss functions. Such methods are typically based on high-level features extracted from pre-trained neural networks, where the loss functions contain two components: style loss and content loss. However, such pre-trained networks are originally designed for object recognition, and hence the high-level features often focus on the primary target and neglect other details. As a result, when input images contain multiple objects potentially at different depths, the resulting images are often unsatisfactory because image layout is destroyed and the boundary between the foreground and background as well as different objects becomes obscured. We observe that the depth map effectively reflects the spatial distribution in an image and preserving the depth map of the content image after stylization helps produce an image that preserves its semantic content. In this paper, we introduce a novel approach for neural style transfer that integrates depth preservation as additional loss, preserving overall image layout while performing style transfer.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121763882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Whole-cloth quilting patterns from photographs
Chenxi Liu, J. Hodgins, J. McCann
Whole-cloth quilts are decorative and functional artifacts made of plain cloth embellished with complicated stitching patterns. We describe a method that can automatically create a sewing pattern for a whole-cloth quilt from a photograph. Our technique begins with a segmented image, extracts desired and optional edges, and creates a continuous sewing path by approximately solving the Rural Postman Problem (RPP). In addition to many example quilts, we provide visual and numerical comparisons to previous single-line illustration approaches.
{"title":"Whole-cloth quilting patterns from photographs","authors":"Chenxi Liu, J. Hodgins, J. McCann","doi":"10.1145/3092919.3092925","DOIUrl":"https://doi.org/10.1145/3092919.3092925","url":null,"abstract":"Whole-cloth quilts are decorative and functional artifacts made of plain cloth embellished with complicated stitching patterns. We describe a method that can automatically create a sewing pattern for a whole-cloth quilt from a photograph. Our technique begins with a segmented image, extracts desired and optional edges, and creates a continuous sewing path by approximately solving the Rural Postman Problem (RPP). In addition to many example quilts, we provide visual and numerical comparisons to previous singleline illustration approaches.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"128 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133391071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Edge- and substrate-based effects for watercolor stylization
Santiago E. Montesdeoca, S. H. Soon, P. Bénard, Romain Vergne, J. Thollot, Hannes Rall, Davide Benvenuti
We investigate characteristic edge- and substrate-based effects for watercolor stylization. These two fundamental elements of painted art play a significant role in traditional watercolors and strongly influence the pigment's behavior and application. Yet a detailed consideration of these specific elements for the stylization of 3D scenes has not been attempted before. Through this investigation, we contribute to the field by presenting ways to emulate two novel effects: dry-brush and gaps & overlaps. In doing so, we also find ways to improve upon well-studied watercolor effects such as edge darkening and substrate granulation. Finally, we integrate controllable external lighting influences over the watercolorized result, together with other previously researched watercolor effects. These effects are combined in a direct stylization pipeline to produce sophisticated watercolor imagery that retains spatial coherence in object space and is locally controllable in real time.
{"title":"Edge- and substrate-based effects for watercolor stylization","authors":"Santiago E. Montesdeoca, S. H. Soon, P. Bénard, Romain Vergne, J. Thollot, Hannes Rall, Davide Benvenuti","doi":"10.1145/3092919.3092928","DOIUrl":"https://doi.org/10.1145/3092919.3092928","url":null,"abstract":"We investigate characteristic edge- and substrate-based effects for watercolor stylization. These two fundamental elements of painted art play a significant role in traditional watercolors and highly influence the pigment's behavior and application. Yet a detailed consideration of these specific elements for the stylization of 3D scenes has not been attempted before. Through this investigation, we contribute to the field by presenting ways to emulate two novel effects: dry-brush and gaps & overlaps. By doing so, we also found ways to improve upon well-studied watercolor effects such as edge-darkening and substrate granulation. Finally, we integrated controllable external lighting influences over the watercolorized result, together with other previously researched watercolor effects. These effects are combined through a direct stylization pipeline to produce sophisticated watercolor imagery, which retains spatial coherence in object-space and is locally controllable in real-time.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125226663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural style transfer: a paradigm shift for image-based artistic rendering?
Amir Semmo, Tobias Isenberg, J. Döllner
In this meta paper we discuss image-based artistic rendering (IB-AR) based on neural style transfer (NST) and argue that, while NST may represent a paradigm shift for IB-AR, it also has to evolve into an interactive tool that considers the design aspects and mechanisms of artwork production. IB-AR has received significant attention in the past decades for visual communication, covering a plethora of techniques that mimic the appeal of artistic media. Example-based rendering represents one of the most promising paradigms in IB-AR for (semi-)automatically simulating artistic media with high fidelity, but it has so far been limited because it relies on pre-defined image pairs for training or uses only low-level image features for texture transfer. Advances in deep learning have been shown to alleviate these limitations by matching content and style statistics via activations of neural network layers, making a generalized style transfer practicable. We categorize style transfer within the taxonomy of IB-AR, then propose a semiotic structure from which we derive a technical research agenda for NST with respect to the grand challenges of NPAR. We finally discuss the potential of NST, identifying applications such as casual creativity and art production.
{"title":"Neural style transfer: a paradigm shift for image-based artistic rendering?","authors":"Amir Semmo, Tobias Isenberg, J. Döllner","doi":"10.1145/3092919.3092920","DOIUrl":"https://doi.org/10.1145/3092919.3092920","url":null,"abstract":"In this meta paper we discuss image-based artistic rendering (IB-AR) based on neural style transfer (NST) and argue, while NST may represent a paradigm shift for IB-AR, that it also has to evolve as an interactive tool that considers the design aspects and mechanisms of artwork production. IB-AR received significant attention in the past decades for visual communication, covering a plethora of techniques to mimic the appeal of artistic media. Example-based rendering represents one the most promising paradigms in IB-AR to (semi-)automatically simulate artistic media with high fidelity, but so far has been limited because it relies on pre-defined image pairs for training or informs only low-level image features for texture transfers. Advancements in deep learning showed to alleviate these limitations by matching content and style statistics via activations of neural network layers, thus making a generalized style transfer practicable. We categorize style transfers within the taxonomy of IB-AR, then propose a semiotic structure to derive a technical research agenda for NSTs with respect to the grand challenges of NPAR. We finally discuss the potentials of NSTs, thereby identifying applications such as casual creativity and art production.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"260 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134079374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time panorama maps
S. Brown, F. Samavati
Panorama maps are stylized paintings of terrain often seen at tourist destinations. They are difficult to create since they are both artistic and grounded in real geographic data. In this paper we present techniques for rendering real-world data in the style of Heinrich Berann's panorama maps in a real-time application. We analyse several of Berann's paintings to identify the artistic elements used. We use this analysis to develop algorithms that mimic the panorama map style, focusing on replicating the terrain deformation, distorted projection, terrain colouring, tree brush strokes, water rendering, and atmospheric scattering. Our approach uses freely available digital earth data to render interactive panorama maps without needing further design work.
{"title":"Real-time panorama maps","authors":"S. Brown, F. Samavati","doi":"10.1145/3092919.3092922","DOIUrl":"https://doi.org/10.1145/3092919.3092922","url":null,"abstract":"Panorama maps are stylized paintings of terrain often seen at tourist destinations. They are difficult to create since they are both artistic and grounded in real geographic data. In this paper we present techniques for rendering real-world data in the style of Heinrich Berann's panorama maps in a real-time application. We analyse several of Berann's paintings to identify the artistic elements used. We use this analysis to form algorithms that mimic the panorama map style, focusing on replicating the terrain deformation, distorted projection, terrain colouring, tree brush strokes, water rendering, and atmospheric scattering. In our approach we use freely available digital earth data to render interactive panorama maps without needing further design work.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117291962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixed illumination analysis in single image for interactive color grading
Sylvain Duchêne, Carlos Aliaga, T. Pouli, P. Pérez
Colorists often use keying or rotoscoping tools to access and edit particular colors or parts of a scene. Although necessary, this is a time-consuming and potentially imprecise process, as it is not possible to fully separate the influence of the light sources in the scene from the colors of the objects and actors within it. To simplify this process, we present a new solution for automatically estimating the color and influence of multiple illuminants, based on image variation analysis. Using this information, we present a new color grading tool for simply and interactively editing the colors of the detected illuminants, which fits naturally into color grading workflows. We demonstrate the use of our solution in several scenes, evaluating the quality of our results by means of a psychophysical study.
{"title":"Mixed illumination analysis in single image for interactive color grading","authors":"Sylvain Duchêne, Carlos Aliaga, T. Pouli, P. Pérez","doi":"10.1145/3092919.3092927","DOIUrl":"https://doi.org/10.1145/3092919.3092927","url":null,"abstract":"Colorists often use keying or rotoscoping tools to access and edit particular colors or parts of the scene. Although necessary, this is a time-consuming and potentially imprecise process, as it is not possible to fully separate the influence of light sources in the scene from the colors of objects and actors within it. To simplify this process, we present a new solution for automatically estimating the color and influence of multiple illuminants, based on image variation analysis. Using this information, we present a new color grading tool for simply and interactively editing the colors of detected illuminants, which fits naturally in color grading workflows. We demonstrate the use of our solution in several scenes, evaluating the quality of our results by means of a psychophysical study.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130131363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A generic framework for the structured abstraction of images
Noura Faraj, Gui-Song Xia, J. Delon, Y. Gousseau
Structural properties are important clues for non-photorealistic representations of digital images. Therefore, image analysis tools have been used intensively either to produce stroke-based renderings or to yield abstractions of images. In this work, we propose to use a hierarchical and geometrical image representation, called a topographic map, made of shapes organized in a tree structure. This analysis tool has two main advantages. First, it deals with all scales, so that every shape of the input image is represented. Second, it accounts for the inclusion properties within the image. By iteratively performing simple local operations on the shapes (removal, rotation, scaling, replacement, ...), we are able to generate abstract renderings of digital photographs ranging from geometric abstraction and painting-like effects to style transfer, all within the same framework. In particular, results show that it is possible to create abstract images evoking Malevich's Suprematist school, while remaining grounded in the structure of digital images, by replacing all the shapes in the tree with simple geometric shapes.
{"title":"A generic framework for the structured abstraction of images","authors":"Noura Faraj, Gui-Song Xia, J. Delon, Y. Gousseau","doi":"10.1145/3092919.3092930","DOIUrl":"https://doi.org/10.1145/3092919.3092930","url":null,"abstract":"Structural properties are important clues for non-photorealistic representations of digital images. Therefore, image analysis tools have been intensively used either to produce stroke-based renderings or to yield abstractions of images. In this work, we propose to use a hierarchical and geometrical image representation, called a topographic map, made of shapes organized in a tree structure. There are two main advantages of this analysis tool. Firstly, it is able to deal with all scales, so that every shape of the input image is represented. Secondly, it accounts for the inclusion properties within the image. By iteratively performing simple local operations on the shapes (removal, rotation, scaling, replacement...), we are able to generate abstract renderings of digital photographs ranging from geometrical abstraction and painting-like effects to style transfer, using the same framework. In particular, results show that it is possible to create abstract images evoking Malevitchs Suprematist school, while remaining grounded in the structure of digital images, by replacing all the shapes in the tree by simple geometric shapes.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124954569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Benchmarking non-photorealistic rendering of portraits
Paul L. Rosin, D. Mould, Itamar Berger, J. Collomosse, Yu-Kun Lai, Chuan Li, Hua Li, Ariel Shamir, Michael Wand, T. Wang, H. Winnemöller
We present a set of images for helping NPR practitioners evaluate their image-based portrait stylisation algorithms. Using a standard set both facilitates comparisons with other methods and helps ensure that presented results are representative. We give two levels of difficulty, each consisting of 20 images selected systematically to provide good coverage of several possible portrait characteristics. We applied three existing portrait-specific stylisation algorithms, two general-purpose stylisation algorithms, and one general learning-based stylisation algorithm to the first level of the benchmark, corresponding to the type of constrained images that have often been used in portrait-specific work. We found that the existing methods are generally effective on this new image set, demonstrating that level one of the benchmark is tractable; challenges remain at level two. The results revealed several advantages conferred by portrait-specific algorithms over general-purpose ones: portrait-specific algorithms can use domain-specific information to preserve key details such as eyes and to eliminate extraneous details, and they have more scope for semantically meaningful abstraction thanks to the underlying face model. Finally, we provide some thoughts on systematically extending the benchmark to higher levels of difficulty.
{"title":"Benchmarking non-photorealistic rendering of portraits","authors":"Paul L. Rosin, D. Mould, Itamar Berger, J. Collomosse, Yu-Kun Lai, Chuan Li, Hua Li, Ariel Shamir, Michael Wand, T. Wang, H. Winnemöller","doi":"10.1145/3092919.3092921","DOIUrl":"https://doi.org/10.1145/3092919.3092921","url":null,"abstract":"We present a set of images for helping NPR practitioners evaluate their image-based portrait stylisation algorithms. Using a standard set both facilitates comparisons with other methods and helps ensure that presented results are representative. We give two levels of difficulty, each consisting of 20 images selected systematically so as to provide good coverage of several possible portrait characteristics. We applied three existing portrait-specific stylisation algorithms, two general-purpose stylisation algorithms, and one general learning based stylisation algorithm to the first level of the benchmark, corresponding to the type of constrained images that have often been used in portrait-specific work. We found that the existing methods are generally effective on this new image set, demonstrating that level one of the benchmark is tractable; challenges remain at level two. Results revealed several advantages conferred by portrait-specific algorithms over general-purpose algorithms: portrait-specific algorithms can use domain-specific information to preserve key details such as eyes and to eliminate extraneous details, and they have more scope for semantically meaningful abstraction due to the underlying face model. Finally, we provide some thoughts on systematically extending the benchmark to higher levels of difficulty.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128008671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pigment-based recoloring of watercolor paintings
Elad Aharoni-Mack, Yakov Shambik, Dani Lischinski
The color palette used by an artist when creating a painting is an important tool for expressing emotion, directing attention, and more. However, choosing a palette is an intricate task that requires considerable skill and experience. In this work, we introduce a new tool designed to let artists experiment with alternative color palettes for existing watercolor paintings. This can be useful for generating alternative renditions of an existing painting, or for aiding the selection of a palette for a new painting related to an existing one. Our tool first estimates the original pigment-based color palette used to create the painting, and then decomposes the painting into a collection of pigment channels, each corresponding to a single palette color. In both of these tasks, we employ a version of the Kubelka-Munk model, which predicts the reflectance of a given mixture of pigments. Each channel in the decomposition is a piecewise-smooth map that specifies the concentration of one palette color across the image. Another estimated map specifies the total thickness of the pigments across the image. Mixing these pigment channels, again according to the Kubelka-Munk model, reconstructs the original painting. The artist is then able to manipulate the individual palette colors, obtaining results by remixing the pigment channels at interactive rates.
{"title":"Pigment-based recoloring of watercolor paintings","authors":"Elad Aharoni-Mack, Yakov Shambik, Dani Lischinski","doi":"10.1145/3092919.3092926","DOIUrl":"https://doi.org/10.1145/3092919.3092926","url":null,"abstract":"The color palette used by an artist when creating a painting is an important tool for expressing emotion, directing attention, and more. However, choosing a palette is an intricate task that requires considerable skill and experience. In this work, we introduce a new tool designed to allow artists to experiment with alternative color palettes for existing watercolor paintings. This could be useful for generating alternative renditions for an existing painting, or for aiding in the selection of a palette for a new painting, related to an existing one. Our tool first estimates the original pigment-based color palette used to create the painting, and then decomposes the painting into a collection of pigment channels, each corresponding to a single palette color. In both of these tasks, we employ a version of the Kubelka-Munk model, which predicts the reflectance of a given mixture of pigments. Each channel in the decomposition is a piecewise-smooth map that specifies the concentration of one of the colors in the palette across the image. Another estimated map specifies the total thickness of the pigments across the image. The mixture of these pigment channels, also according to the Kubelka-Munk model, reconstructs the original painting. The artist is then able to manipulate the individual palette colors, obtaining results by remixing the pigment channels at interactive rates.","PeriodicalId":204343,"journal":{"name":"International Symposium on Non-Photorealistic Animation and Rendering","volume":"511 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123069448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}