Latest publications from ACM SIGGRAPH 2015 Posters

Paint-like compositing based on RYB color model
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792648
Junichi Sugita, Tokiichiro Takahashi
Many people have been familiar with the subtractive color model, based on pigment color compositing, since early childhood. The RGB color space, by contrast, is not comprehensible for children because it relies on additive color compositing: in RGB, the resulting mixture is often different from the color a viewer expects. CMYK is a well-known subtractive color space, but its three primary colors are not familiar. The Kubelka-Munk model (KM model for short) reproduces pigment compositing and a paint-like appearance through physically based simulation, but it is difficult to use because of its many simulation parameters.
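The poster does not give its compositing equations, but a common way to realize RYB mixing in code is trilinear interpolation over the eight corners of the RYB cube, in the spirit of Gossett and Chen's paint-inspired color mixing. A minimal Python sketch; the corner colors and the naive pigment-accumulation rule below are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

# Approximate RGB values for the eight corners of the RYB cube,
# loosely following Gossett and Chen's paint-inspired mixing;
# the poster's own mapping may differ.
RYB_CORNERS = {
    (0, 0, 0): (1.0, 1.0, 1.0),      # no pigment -> white canvas
    (1, 0, 0): (1.0, 0.0, 0.0),      # red
    (0, 1, 0): (1.0, 1.0, 0.0),      # yellow
    (0, 0, 1): (0.163, 0.373, 0.6),  # blue
    (1, 1, 0): (1.0, 0.5, 0.0),      # red + yellow -> orange
    (1, 0, 1): (0.5, 0.0, 0.5),      # red + blue -> purple
    (0, 1, 1): (0.0, 0.66, 0.2),     # yellow + blue -> green
    (1, 1, 1): (0.2, 0.094, 0.0),    # all pigments -> near-black
}

def ryb_to_rgb(r, y, b):
    """Trilinearly interpolate an RYB triple to an RGB color."""
    rgb = np.zeros(3)
    for (cr, cy, cb), corner in RYB_CORNERS.items():
        weight = ((r if cr else 1.0 - r) *
                  (y if cy else 1.0 - y) *
                  (b if cb else 1.0 - b))
        rgb += weight * np.asarray(corner)
    return rgb

# Subtractive-style compositing: accumulate pigment in RYB, then convert.
red, yellow = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
mix = np.minimum(red + yellow, 1.0)  # naive pigment accumulation
print(ryb_to_rgb(*mix))              # roughly orange, as a painter expects
```

Mixing red and yellow this way yields orange rather than the washed-out result of averaging them in RGB, which is the intuition the abstract appeals to.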
Citations: 1
Spherical light field environment capture for virtual reality using a motorized pan/tilt head and offset camera
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787648
P. Debevec, G. Downing, M. Bolas, Hsuen-Yueh Peng, Jules Urbach
Today's most compelling virtual reality experiences shift the user's viewpoint within the virtual environment based on input from a head-tracking system, giving a compelling sense of motion parallax. While this is straightforward for computer-generated scenes, photographic VR content generally does not provide motion parallax in response to head motion. Even 360° stereo panoramas, which offer separate left and right views, fail to allow the vantage point to change in response to head motion.
Citations: 9
Enhancing time and space efficiency of kd-tree for ray-tracing static scenes
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787658
Byeongjun Choi, Woong Seo, I. Ihm
In the ray-tracing community, the surface-area heuristic (SAH) has been employed as the de facto standard strategy for building a high-quality kd-tree. Aiming to improve both the time and space efficiency of the conventional SAH-based kd-tree in ray tracing, we propose an extended kd-tree representation together with an effective tree-construction algorithm. Our experiments with several test scenes revealed that the presented kd-tree scheme significantly reduces the memory required to represent the tree structure while also increasing the overall frame rate for rendering.
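For context, the SAH cost that such builders minimize at each candidate split plane has a compact closed form. A minimal sketch with assumed cost constants (the poster does not state its traversal and intersection costs):

```python
def sah_cost(sa_left, sa_right, sa_parent, n_left, n_right,
             c_traverse=1.0, c_intersect=1.5):
    """Surface-area-heuristic cost of one candidate split.

    A random ray passing through the parent node enters each child
    with probability proportional to the child's surface area, so the
    expected cost is the traversal cost plus the area-weighted cost of
    intersecting the primitives in each child.
    """
    p_left = sa_left / sa_parent
    p_right = sa_right / sa_parent
    return c_traverse + c_intersect * (p_left * n_left + p_right * n_right)
```

A builder evaluates this cost at every candidate plane, splits where it is minimal, and terminates when no split beats the cost of making a leaf.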
Citations: 0
Rigid fluid
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787654
Yu Wang, M. Olano
We present a framework for modeling solid-fluid phase change. Our framework is physically motivated, with geometric constraints applied to define rigid dynamics using shape matching. In each simulation step, particle positions are updated using an extended SPH solver in which they are treated as fluid. A geometric constraint, consisting of an optimal translation and an optimal rotation, is then computed from the current particle configuration. Our approach differs from methods such as [Carlson et al. 2004] in that we solve rigid dynamics by using a stable geometric constraint [Müller et al. 2005] embedded in a fluid simulator.
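The optimal translation and rotation of such a shape-matching constraint can be computed from mass-weighted centroids and a polar decomposition of the particle moment matrix, as in [Müller et al. 2005]. A minimal NumPy sketch under those assumptions (the authors' exact solver is not given):

```python
import numpy as np

def shape_match(x0, x, m):
    """Optimal rigid transform in the shape-matching sense.

    x0 : (n, 3) rest positions, x : (n, 3) current positions,
    m  : (n,) particle masses.  Returns (R, c0, c) such that
    R @ (x0_i - c0) + c matches x_i best in the least-squares sense.
    """
    c0 = np.average(x0, axis=0, weights=m)  # rest centroid
    c = np.average(x, axis=0, weights=m)    # current centroid
    q = x0 - c0
    p = x - c
    A_pq = (m[:, None] * p).T @ q           # moment matrix: sum m_i p_i q_i^T
    U, _, Vt = np.linalg.svd(A_pq)          # polar decomposition via SVD
    if np.linalg.det(U @ Vt) < 0:           # guard against reflections
        U[:, -1] *= -1.0
    return U @ Vt, c0, c

# Goal positions the constraint pulls particles toward:
# R, c0, c = shape_match(x0, x, m); goals = (x0 - c0) @ R.T + c
```

The optimal translation is just the difference of the mass-weighted centroids; the optimal rotation is the orthonormal factor of the moment matrix.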
Citations: 14
First-person view animation editing utilizing video see-through augmented reality
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787656
Liang-Chen Wu, Jia-Ye Li, Yu-Hsuan Huang, M. Ouhyoung
When making 3D animation with traditional methods, we usually edit 3D objects in three-dimensional space on a screen, so we must rely on input devices both to edit and to observe the models. These processes can be improved. With recent advances in gesture recognition, virtual information operations are no longer confined to the mouse and keyboard: recognized gestures can be applied to operations that are otherwise difficult when editing model motion. For observing the 3D model, we use head tracking from an external device. Interactive results can then be observed without complicated operations, because the system accurately maps real-world head movements.
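As a sketch of the head-tracking component, mapping a tracked head pose to the virtual camera amounts to inverting the head's world transform every frame. The tracker interface below is hypothetical; the poster does not name its tracking hardware or API:

```python
import numpy as np

def view_matrix(head_pos, head_rot):
    """Build a 4x4 view matrix from a tracked head pose.

    head_pos : (3,) world-space head position from the tracker.
    head_rot : (3, 3) rotation matrix for the head orientation.
    The view matrix is the inverse of the head's world transform,
    so the virtual camera follows the user's real head motion.
    """
    view = np.eye(4)
    view[:3, :3] = head_rot.T              # inverse of a pure rotation
    view[:3, 3] = -head_rot.T @ head_pos   # inverse translation
    return view

# Per frame (hypothetical tracker API):
# camera.set_view(view_matrix(tracker.position(), tracker.rotation()))
```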
Citations: 0
Creating near-field VR using stop motion characters and a touch of light-field rendering
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787640
M. Bolas, Ashok Kuruvilla, Shravani Chintalapudi, Fernando Rabelo, V. Lympouridis, Christine Barron, Evan A. Suma, Catalina Matamoros, Cristina Brous, Alicja Jasina, Yawen Zheng, Andrew Jones, P. Debevec, D. Krum
There is rapidly growing interest in the creation of rendered environments and content for tracked head-mounted stereoscopic displays for virtual reality. Currently, the most popular approaches include polygonal environments created with game engines, as well as 360° spherical cameras used to capture live-action video. These tools were not originally designed to leverage the more complex visual cues available in VR when users laterally shift viewpoints, manually interact with models, and employ stereoscopic vision. There is a need for a fresh look at graphics techniques that can capitalize upon the unique affordances that make VR so compelling.
Citations: 4
The XML3D architecture
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792623
K. Sons, F. Klein, Jan Sutter, P. Slusallek
Graphics hardware has become ubiquitous: integrated into CPUs and mobile devices, and recently even embedded into cars. With the advent of WebGL, accelerated graphics is finally accessible from within the web browser. However, the capabilities of GPUs are still almost exclusively exploited by the video game industry, where experts produce specialized content for game engines.
Citations: 1
Dynamic fur on mobile using textured offset surfaces
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787649
Shaohui Jiao, Xiaofeng Tong, Eric Li, Wenlong Li
Fur simulation is crucial in many graphics applications, since it can greatly enhance the realism of virtual objects such as animal avatars. However, because of the high computational cost of processing massive numbers of fur strands and the complexity of their motion, dynamic fur is regarded as a challenging task, especially on mobile platforms with low computing power. To support real-time fur rendering in mobile applications, we propose a novel method called textured offset surfaces (TOS). The furry surface is represented by a set of offset surfaces shifted outwards from the original mesh, as shown in Figure 1(a). Each offset surface is textured with a scattering density (red rectangles in Figure 1(a)) that implicitly represents the fur geometry; its values can be changed by texture warping to simulate fur animation. To achieve a high-quality anisotropic illumination result, as shown in Figure 1(b), the Kajiya/Banks lighting model is employed in the rendering phase.
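The Kajiya/Banks-style strand shading mentioned above depends only on the angles the light and view directions make with the strand tangent, not on a surface normal. A minimal per-point sketch of the Kajiya-Kay terms; the colors and shininess exponent are placeholder parameters, not the poster's values:

```python
import numpy as np

def kajiya_kay(T, L, V, diffuse_color, spec_color, shininess=32.0):
    """Kajiya-Kay fur/hair shading at one strand point.

    T, L, V : unit tangent along the strand, unit vector to the light,
    and unit vector to the viewer, respectively.
    """
    t_dot_l = np.clip(np.dot(T, L), -1.0, 1.0)
    t_dot_v = np.clip(np.dot(T, V), -1.0, 1.0)
    sin_tl = np.sqrt(1.0 - t_dot_l ** 2)   # diffuse term: sin(T, L)
    sin_tv = np.sqrt(1.0 - t_dot_v ** 2)
    diffuse = sin_tl
    specular = max(t_dot_l * t_dot_v + sin_tl * sin_tv, 0.0) ** shininess
    return (diffuse * np.asarray(diffuse_color)
            + specular * np.asarray(spec_color))
```

On the offset surfaces, the tangent would presumably be fetched per texel alongside the scattering density and the same terms evaluated in the fragment shader.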
Citations: 0
BGMaker: example-based anime background image creation from a photograph
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787646
Shugo Yamaguchi, Chie Furusawa, Takuya Kato, Tsukasa Fukusato, S. Morishima
Anime designers often paint actual sceneries, based on photographs, to serve as background images that complement the characters. As painting background scenery is time-consuming and cost-inefficient, there is high demand for techniques that can convert photographs into anime-styled graphics. Previous approaches for this purpose, such as Image Quilting [Efros and Freeman 2001], transferred a source texture onto a target photograph. These methods synthesized corresponding source patches with the target elements of a photograph, where correspondence was found through nearest-neighbor search such as PatchMatch [Barnes et al. 2009]. However, the nearest-neighbor patch is not always the most suitable patch for anime transfer, because photographs and anime background images differ in color and texture. For example, real-world colors need to be converted into specific anime colors; further, the type of brushwork required to achieve an anime effect differs across photograph elements (e.g. sky, mountain, grass). Thus, to obtain the most suitable patch, we propose a method that establishes global region correspondence before local patch matching. In our proposed method, BGMaker, (1) we divide the real and anime images into regions; (2) we automatically acquire correspondence between regions on the basis of color and texture features; and (3) we search for and synthesize the most suitable patch within the corresponding region. Our primary contribution in this paper is a method for automatically acquiring correspondence between target and source regions of different color and texture, which allows us to generate an anime background image while preserving the details of the source image.
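To make step (3) concrete: once a target region has been matched to a source region, the patch search is simply confined to that region's pixels. A brute-force sketch of a region-constrained search (a real system would accelerate this with a PatchMatch-style scheme; the SSD cost and names here are illustrative assumptions):

```python
import numpy as np

def best_patch(target_patch, source, source_mask, patch=7):
    """Find the best-matching source patch inside one corresponding region.

    target_patch : (patch, patch, 3) patch from the photograph.
    source       : (H, W, 3) anime exemplar image.
    source_mask  : (H, W) boolean mask of the corresponding source region
                   (e.g. 'sky' pixels matched to a 'sky' target region).
    Returns the top-left corner (y, x) of the best source patch.
    """
    h, w = source.shape[:2]
    best, best_cost = None, np.inf
    for y in range(h - patch + 1):
        for x in range(w - patch + 1):
            if not source_mask[y:y + patch, x:x + patch].all():
                continue  # skip patches that leave the corresponding region
            cand = source[y:y + patch, x:x + patch].astype(float)
            cost = np.sum((cand - target_patch) ** 2)  # SSD in RGB
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best
```

The region constraint is what keeps, say, sky brushwork from being pasted onto grass, even when the raw colors happen to be close.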
Citations: 1
FlexAR: anatomy education through kinetic tangible augmented reality
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792629
M. Saenz, J. Strunk, Kelly Maset, J. Seo, E. Malone
We present FlexAR, a kinetic tangible augmented reality [Billinghurst 2008] application for anatomy education. Anatomy has traditionally been taught in two dimensions, particularly to those in non-medical fields such as artists. Medical students gain hands-on experience through cadaver dissection [Winkelmann 2007]. However, with dissection becoming less practical, researchers have begun evaluating techniques for teaching anatomy through technology.
Citations: 6