"Art directed rendering & shading using control images". E. Akleman, Siran Liu, D. House. ACM SIGGRAPH 2015 Posters, July 2015. doi:10.1145/2787626.2792612

In this work, we present a simple mathematical approach to art-directed shader development. We have tested this approach over two semesters in an introductory-level graduate rendering and shading class at Texas A&M University. The students in the class each chose an artist's style to mimic, and then easily created rendered images strongly resembling that style (see Figure 1). The method provides shader developers with an intuitive process, giving them a high level of visual control in the creation of stylized depictions.
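The abstract does not spell out the mathematics, but the general control-image idea can be sketched: a computed shading term is used as a lookup coordinate into an artist-painted image, so the artist paints the tone mapping directly instead of tuning shader parameters. A minimal sketch, assuming a 1D control strip indexed by a Lambertian diffuse term (the function name and lookup scheme are illustrative, not the authors' exact formulation):

```python
import numpy as np

def shade_with_control_image(diffuse, control_image):
    """Map a per-pixel diffuse shading term in [0, 1] to artist-chosen
    colors by using it as a lookup index into a control image.

    diffuse:       (H, W) array of Lambertian shading values in [0, 1]
    control_image: (N, 3) array; row 0 = color for darkest shading,
                   row N-1 = color for brightest shading
    """
    n = control_image.shape[0]
    idx = np.clip((diffuse * (n - 1)).astype(int), 0, n - 1)
    return control_image[idx]          # fancy indexing: one color per pixel

# toy example: a two-tone "cel shading" control strip
control = np.array([[0.2, 0.1, 0.4],   # shadow color
                    [0.2, 0.1, 0.4],
                    [1.0, 0.9, 0.7],   # lit color
                    [1.0, 0.9, 0.7]])
diffuse = np.array([[0.1, 0.9]])       # one dark pixel, one bright pixel
shaded = shade_with_control_image(diffuse, control)
```

Swapping in a different painted strip restyles the whole rendering without touching the lighting computation, which is the kind of high-level visual control the abstract describes.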
"Increasing realism of animated grass in real-time game environments". Benjamin Knowles, O. Fryazinov. ACM SIGGRAPH 2015 Posters, July 2015. doi:10.1145/2787626.2787660

With the increasing quality of real-time graphics, it is vital that assets move in a convincing manner; otherwise the player's immersion can be broken. Grass is an important case, as it can move substantially and often occupies a large portion of screen space in games. Grass animation is a subject of academic research [Fernando 2004; Perbet and Cani 2001] as well as a technology implemented in a number of video games, including Far Cry 4, Battlefield 4, Dear Esther, and Unigine Valley. Comparing video game assets with reality shows that current methods have a number of problems that decrease the realism of the resulting grass animation: 1) the visibly planar nature of grass geometry, and 2) problems with grass movement, including over-connectivity of grass blades with respect to their neighbours, no obvious wind direction, and exaggerated swaying motions. In this paper we propose to increase the realism of grass by focusing on its movement. The main contributions of this work are: 1) distinguishing ambient and directional components of the wind, and 2) a method for calculating directional wind using a grayscale map and a wind vector. The grass was implemented with vertex shaders, in line with the majority of methods described in the academic literature (e.g., [Fernando 2004]) and implemented in modern games.
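The two wind components above can be sketched as a per-vertex horizontal displacement: an ambient sway every blade receives, plus a directional push along the wind vector whose strength is sampled from the grayscale map at the blade's position. A minimal CPU-side sketch of that decomposition (function names, constants, and the sampling scheme are illustrative assumptions, not the authors' shader code):

```python
import math

def grass_vertex_offset(pos_xz, height01, t,
                        wind_dir, wind_map, map_size,
                        ambient_amp=0.05, ambient_freq=2.0):
    """Horizontal offset for one grass vertex, combining ambient and
    directional wind.

    pos_xz:   (x, z) world position of the blade root
    height01: vertex height along the blade, 0 at root, 1 at tip
    t:        time in seconds
    wind_dir: normalized (x, z) wind direction
    wind_map: 2D grid of grayscale wind strengths in [0, 1]
    map_size: world-space extent covered by wind_map
    """
    # ambient component: small sway with a per-blade phase offset,
    # so neighbouring blades do not move in lockstep
    phase = pos_xz[0] * 1.7 + pos_xz[1] * 2.3
    ambient = ambient_amp * math.sin(ambient_freq * t + phase)

    # directional component: sample the grayscale map at this blade
    u = int((pos_xz[0] / map_size) * len(wind_map)) % len(wind_map)
    v = int((pos_xz[1] / map_size) * len(wind_map[0])) % len(wind_map[0])
    strength = wind_map[u][v]

    # roots stay planted: scale the offset by height along the blade
    dx = height01 * (ambient + strength * wind_dir[0])
    dz = height01 * (ambient * 0.3 + strength * wind_dir[1])
    return dx, dz
```

In a real vertex shader the same logic would run per vertex on the GPU, with the grayscale map bound as a texture and scrolled over time along the wind vector to convey an obvious wind direction.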
"Inferring gaze shifts from captured body motion". D. Rakita, T. Pejsa, Bilge Mutlu, Michael Gleicher. ACM SIGGRAPH 2015 Posters, July 2015. doi:10.1145/2787626.2787663

Motion-captured performances seldom include eye gaze, because capturing this motion requires eye-tracking technology that is not typically part of a motion capture setup. Yet eye gaze information is important, as it tells us what the actor was attending to during capture and adds to the expressivity of the performance.
"Performance and precision: mobile solutions for high quality engineering drawings". R. Krishnaswamy. ACM SIGGRAPH 2015 Posters, July 2015. doi:10.1145/2787626.2792615

Engineering documents, e.g. 'blueprints', are one of the traditional forms of paper-based information now moving to the digital realm. With the rise of mobile devices and the evolution of mobile GPUs, there are tremendous opportunities for applications that view and interact with engineering documents.
"Augmented reality for cryoablation procedures". Hugo Talbot, Frédérick Roy, S. Cotin. ACM SIGGRAPH 2015 Posters, July 2015. doi:10.1145/2787626.2792649

Cryotherapy is a rapidly growing minimally invasive technique for the treatment of different kinds of tumors, such as breast, renal, and prostate cancer. Several hollow needles are percutaneously inserted into the target area under image guidance, and a gas (usually argon) is then decompressed inside the needles. Through the Joule-Thomson effect, the temperature drops and a ball of ice crystals forms around the tip of each needle. Radiologists rely on the geometry of this iceball (273 K), visible on computed tomography (CT) or magnetic resonance (MR) images, to assess the status of the ablation. However, cellular death only occurs when the temperature falls below 233 K. The complexity of the procedure therefore resides in planning the optimal number, position, and orientation of the needles required to treat the tumor, while avoiding any damage to the surrounding healthy tissue.
"Color perception difference: white and gold, or black and blue?". Hisashi Watanabe, Toshiya Fujii, Tatsuya Nakamura, Tsuguhiro Korenaga. ACM SIGGRAPH 2015 Posters, July 2015. doi:10.1145/2787626.2787630

It is a common philosophical question whether your blue is the same as my blue. The two-tone striped dress shown in Figure 1, which attracted a lot of attention on the Internet, gave us a clear answer: "No." Some people see the dress as blue and black, whereas others insist it is white and gold. So your blue can be my white. Why is it that people looking at the same picture perceive totally different color combinations?
"Half frame forwarding: frame-rate up conversion for tiled rendering GPU". Jinhong Park, Minkyu Kim, Sunho Ki, Youngduke Seo, Chulho Shin. ACM SIGGRAPH 2015 Posters, July 2015. doi:10.1145/2787626.2787634

Although the mobile industry has recently begun trending towards high-quality graphics content, it is still difficult to satisfy this trend due to the performance, power, and thermal constraints of the GPU/CPU in mobile application processors.
"Hue extraction and tone match: generating a theme color to enhance the emotional quality of an image". Eunjin Kim, Hyeon‐Jeong Suk. ACM SIGGRAPH 2015 Posters, July 2015. doi:10.1145/2787626.2787657

In editorial design, a harmonious match between a picture and a solid color is often essential to achieving a high-quality graphical artwork. Color is a compelling cue for eliciting emotional responses and can thus enhance the emotional quality of an image. Tools and methods have been developed to automate the color selection process, and noticeable progress has been made in extracting the perceptually dominant colors of an image. However, little attention has been paid to the emotional characteristics of the selected colors, and the choice has relied heavily on color designers' manual judgments. In this study, we propose a computational method that creates a color enhancing both the aesthetic and affective quality of an image, which we call a theme color.
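The title suggests a two-step pipeline: extract a dominant hue from the image, then re-synthesize a color at a designed tone rather than reusing a raw pixel color. A minimal sketch of that idea, assuming a simple hue histogram over sufficiently colorful pixels (bin count, thresholds, and the fixed tone values are illustrative assumptions, not the authors' method):

```python
import colorsys
from collections import Counter

def theme_color(pixels, n_bins=12, sat=0.6, val=0.9):
    """Return an RGB theme color for an image.

    Step 1 (hue extraction): histogram the hues of pixels that are
    saturated and bright enough to carry a perceivable hue.
    Step 2 (tone match): rebuild a color from the dominant hue at a
    chosen saturation/value, i.e. the tone is designed, not sampled.

    pixels: iterable of (r, g, b) tuples in [0, 1]
    """
    bins = Counter()
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s > 0.2 and v > 0.2:          # ignore near-gray pixels
            bins[int(h * n_bins) % n_bins] += 1
    if not bins:
        return (0.5, 0.5, 0.5)           # no dominant hue: neutral gray
    dominant = bins.most_common(1)[0][0]
    h = (dominant + 0.5) / n_bins        # center of the winning hue bin
    return colorsys.hsv_to_rgb(h, sat, val)
```

Separating hue from tone is the key design choice: the hue ties the swatch to the picture, while the tone can be tuned to the desired emotional quality.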
"Contour guided surface deformation for volumetric segmentation". M. Holloway, T. Ju, C. Grimm. ACM SIGGRAPH 2015 Posters, July 2015. doi:10.1145/2787626.2792638

In clinical practice, when a subject is imaged (e.g. by CT or MRI), the result is a 3D image of volumetric data. In order to study an organ, bone, or other object of interest, this data must be segmented to obtain a 3D model that can be used in any number of downstream applications. When used for treatment planning, these segmentations need to be not only accurate but also produced quickly to avoid health risks. Automatic segmentation methods are becoming more reliable, but many experts in the scientific community still rely on time-consuming manual segmentation.
"Interactive tree illustration generation system". Azusa Mama, Yuki Morimoto, K. Nakajima. ACM SIGGRAPH 2015 Posters, July 2015. doi:10.1145/2787626.2792652

Modeling 3D trees is a major theme in the field of computer graphics [Steven et al. 2012]. However, there has been little research on generating illustrations of trees [Yu-Sheng et al. 2012]. One way to generate them is to render their 3D models; however, it is difficult to obtain the characteristically flat representation of illustrations because of the concentration of foliage in the central part of the tree. We present a system to generate a wide variety of tree illustrations by controlling the density of branches, the shape of the canopy, and the overlap of flowers and leaves (Fig. 1).