
Proceedings. Pacific Conference on Computer Graphics and Applications: Latest Publications

Cloud-Assisted Hybrid Rendering for Thin-Client Games and VR Applications
Pub Date : 2022-10-11 DOI: 10.2312/pg.20211389
Yuzao Tan, Louiz Kim-Chan, Anthony Halim, A. Bhojan
We introduce a novel distributed rendering approach to generate high-quality graphics in thin-client games and VR applications. Many mobile devices have limited computational power to achieve ray tracing in real time. Hence, hardware-accelerated cloud servers can perform ray tracing instead and have their output streamed to clients, as in remote rendering. Applying the approach of distributed hybrid rendering, we leverage the computational capabilities of both the thin client and the powerful server by performing rasterization locally while offloading ray tracing to the server. With advancements in 5G technology, the server and client can communicate effectively over the network and work together to produce high-quality output while maintaining interactive frame rates. Our approach achieves better visuals than local rendering and faster performance than remote rendering.
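To make the division of labor concrete, the following is a minimal per-frame sketch, not the paper's implementation: a stand-in client raster pass, a stand-in server ray-traced pass (which in the real system would run on a cloud GPU and be streamed back over the network), and a client-side composite. All function names, resolutions, and toy shading values are illustrative placeholders.

```python
import numpy as np

W, H = 320, 180  # thin-client resolution (placeholder)

def rasterize_local(ambient):
    """Stand-in for the client's cheap local raster pass (e.g. direct lighting)."""
    return np.full((H, W, 3), ambient, dtype=np.float32)

def ray_trace_remote(seed):
    """Stand-in for the server's expensive ray-traced pass (e.g. reflections/GI).
    In the real system this is computed on a cloud GPU and streamed to the client."""
    rng = np.random.default_rng(seed)
    return 0.2 * rng.random((H, W, 3)).astype(np.float32)

def composite(raster, traced):
    """Client-side merge of the locally rasterized and remotely traced contributions."""
    return np.clip(raster + traced, 0.0, 1.0)

# one iteration of the per-frame hybrid loop
frame = composite(rasterize_local(ambient=0.5), ray_trace_remote(seed=0))
print(frame.shape)  # (180, 320, 3)
```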
{"title":"Cloud-Assisted Hybrid Rendering for Thin-Client Games and VR Applications","authors":"Yuzao Tan, Louiz Kim-Chan, Anthony Halim, A. Bhojan","doi":"10.2312/pg.20211389","DOIUrl":"https://doi.org/10.2312/pg.20211389","url":null,"abstract":"We introduce a novel distributed rendering approach to generate high-quality graphics in thin-client games and VR applications. Many mobile devices have limited computational power to achieve ray tracing in real-time. Hence, hardware-accelerated cloud servers can perform ray tracing instead and have their output streamed to clients in remote rendering. Applying the approach of distributed hybrid rendering, we leverage the computational capabilities of both the thin client and powerful server by performing rasterization locally while offloading ray tracing to the server. With advancements in 5G technology, the server and client can communicate effectively over the network and work together to produce a high-quality output while maintaining interactive frame rates. Our approach can achieve better visuals as compared to local rendering but faster performance as compared to remote rendering.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"66 1","pages":"61-62"},"PeriodicalIF":0.0,"publicationDate":"2022-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87506021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Shadow Removal via Cascade Large Mask Inpainting
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221246
Juwan Kim, Seung-Heon Kim, I. Jang
We present a novel shadow removal framework based on the image inpainting approach. The proposed method consists of two cascaded Large Mask Inpainting (LaMa) networks for shadow inpainting and edge inpainting. Experiments with the ISTD and adjusted ISTD datasets show that our method achieves competitive shadow removal results compared to state-of-the-art methods. We also show that shadows are well removed from complex images with large shadows, such as urban aerial images.
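A minimal sketch of the cascade's control flow follows, with toy stand-ins for the two LaMa networks; the assumption that the inpainted edge map guides the subsequent shadow fill is ours, based only on the abstract.

```python
import numpy as np

def edge_inpaint(edge_map, mask):
    """Toy stand-in for the first LaMa network: fill masked edge pixels
    with the mean edge response from outside the mask."""
    filled = edge_map.copy()
    filled[mask] = edge_map[~mask].mean()
    return filled

def shadow_inpaint(image, edges, mask):
    """Toy stand-in for the second LaMa network: fill the shadow region,
    modulated by the already-inpainted edge map."""
    filled = image.copy()
    filled[mask] = image[~mask].mean() * (1.0 - 0.5 * edges[mask])
    return filled

def remove_shadow(image, shadow_mask):
    gy, gx = np.gradient(image)
    edges = np.abs(gx) + np.abs(gy)                   # crude edge map
    edges = edge_inpaint(edges, shadow_mask)          # stage 1: edge inpainting
    return shadow_inpaint(image, edges, shadow_mask)  # stage 2: shadow inpainting

img = np.random.default_rng(1).random((64, 64)).astype(np.float32)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                             # made-up shadow region
print(remove_shadow(img, mask).shape)                 # (64, 64)
```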
{"title":"Shadow Removal via Cascade Large Mask Inpainting","authors":"Juwan Kim, Seung-Heon Kim, I. Jang","doi":"10.2312/pg.20221246","DOIUrl":"https://doi.org/10.2312/pg.20221246","url":null,"abstract":"We present a novel shadow removal framework based on the image inpainting approach. The proposed method consists of two cascade Large-Mask inpainting(LaMa) networks for shadow inpainting and edge inpainting. Experiments with the ISTD and adjusted ISTD dataset show that our method achieves competitive shadow removal results compared to state-of-the methods. And we also show that shadows are well removed from complex and large shadow images, such as urban aerial images","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"32 1","pages":"49-50"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90816584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DFGA: Digital Human Faces Generation and Animation from the RGB Video using Modern Deep Learning Technology
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221249
Diqiong Jiang, Li You, Jian Chang, Ruofeng Tong
High-quality and personalized digital human faces have been widely used in media and entertainment, from film and game production to virtual reality. However, the existing technology for generating digital faces requires extremely intensive labor, which prevents the large-scale popularization of digital face technology. To tackle this problem, the proposed research will investigate deep learning-based facial modeling and animation technologies to (1) create personalized face geometry from a single image, including a recognizable neutral face shape and believable personalized blendshapes; (2) generate personalized production-level facial skin textures from a video or image sequence; and (3) automatically drive and animate a 3D target avatar from an actor's 2D facial video or audio. Our innovation is to achieve these tasks both efficiently and precisely by using an end-to-end framework with modern deep learning technology (StyleGAN, Transformer, NeRF).
{"title":"DFGA: Digital Human Faces Generation and Animation from the RGB Video using Modern Deep Learning Technology","authors":"Diqiong Jiang, Li You, Jian Chang, Ruofeng Tong","doi":"10.2312/pg.20221249","DOIUrl":"https://doi.org/10.2312/pg.20221249","url":null,"abstract":"High-quality and personalized digital human faces have been widely used in media and entertainment, from film and game production to virtual reality. However, the existing technology of generating digital faces requires extremely intensive labor, which prevents the large-scale popularization of digital face technology. In order to tackle this problem, the proposed research will investigate deep learning-based facial modeling and animation technologies to 1) create personalized face geometry from a single image, including the recognizable neutral face shape and believable personalized blendshapes; (2) generate personalized production-level facial skin textures from a video or image sequence; (3) automatically drive and animate a 3D target avatar by an actor’s 2D facial video or audio. Our innovation is to achieve these tasks both efficiently and precisely by using the end-to-end framework with modern deep learning technology (StyleGAN, Transformer, NeRF).","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"55 1","pages":"63-64"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78904127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-instance Referring Image Segmentation of Scene Sketches based on Global Reference Mechanism
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221238
Pengyang Ling, Haoran Mo, Chengying Gao
Scene sketch segmentation based on referring expressions plays an important role in sketch editing for the anime industry. While most existing referring image segmentation approaches are designed for the standard task of generating a binary segmentation mask for a single target or a group of targets, we think it necessary to equip these models with the ability of multi-instance segmentation. To this end, we propose GRM-Net, a one-stage framework tailored for multi-instance referring image segmentation of scene sketches. We extract the language features from the expression and fuse them into a conventional instance segmentation pipeline to filter out the undesired instances in a coarse-to-fine manner while keeping the matched ones. To model the relative arrangement of the objects and the relationships among them from a global view, we propose a global reference mechanism (GRM) that assigns references to each detected candidate to identify its position. We compare with existing methods designed for multi-instance referring image segmentation of scene sketches and for the standard task of referring image segmentation, and the results demonstrate the effectiveness and superiority of our approach.
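The key departure from single-target referring segmentation is that every candidate whose score matches the expression is kept rather than only the best one. A minimal sketch of that filtering step is shown below; the embeddings and threshold are illustrative placeholders, and the paper's learned fusion and global reference mechanism are not reproduced.

```python
import numpy as np

def select_instances(expr_emb, inst_embs, threshold=0.5):
    """Keep every candidate instance whose embedding matches the expression.

    expr_emb:  (D,)   embedding of the referring expression (placeholder)
    inst_embs: (N, D) embeddings of detected candidate instances (placeholder)
    The paper's learned fusion and global reference mechanism (GRM), which also
    encodes each candidate's position relative to the others, are not shown;
    cosine similarity with a fixed threshold is a toy stand-in.
    """
    sims = inst_embs @ expr_emb / (
        np.linalg.norm(inst_embs, axis=1) * np.linalg.norm(expr_emb) + 1e-8)
    return np.where(sims > threshold)[0]  # every match survives, not just argmax

rng = np.random.default_rng(0)
print(select_instances(rng.random(16), rng.random((5, 16))))
```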
{"title":"Multi-instance Referring Image Segmentation of Scene Sketches based on Global Reference Mechanism","authors":"Pengyang Ling, Haoran Mo, Chengying Gao","doi":"10.2312/pg.20221238","DOIUrl":"https://doi.org/10.2312/pg.20221238","url":null,"abstract":"Scene sketch segmentation based on referring expression plays an important role in sketch editing of anime industry. While most existing referring image segmentation approaches are designed for the standard task of generating a binary segmentation mask for a single or a group of target(s), we think it necessary to equip these models with the ability of multi-instance segmentation. To this end, we propose GRM-Net, a one-stage framework tailored for multi-instance referring image segmentation of scene sketches. We extract the language features from the expression and fuse it into a conventional instance segmentation pipeline for filtering out the undesired instances in a coarse-to-fine manner and keeping the matched ones. To model the relative arrangement of the objects and the relationship among them from a global view, we propose a global reference mechanism (GRM) to assign references to each detected candidate to identify its position. We compare with existing methods designed for multi-instance referring image segmentation of scene sketches and for the standard task of referring image segmentation, and the results demonstrate the effectiveness and superiority of our approach.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"1 1","pages":"7-12"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83026370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Human Face Modeling based on Deep Learning through Line-drawing
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221239
Bin Deng, Y. Kawanaka, S. Sato, K. Sakurai, Shang Gao, Z. Tang
This paper presents a deep learning-based method for creating 3D human face models. In recent years, several sketch-based shape modeling methods have been proposed. These methods allow the user to easily model various shapes, including animals, buildings, vehicles, and so on. However, few methods have been proposed for human face models. If we can create 3D human face models via line drawing, models of cartoon or fantasy characters can be easily created. To achieve this, we propose a sketch-based face modeling method. When a single line-drawing image is input to our system, a corresponding 3D face model is generated. Our system is based on deep learning: many human face models and corresponding images rendered as line drawings are prepared, and a network is trained on these datasets. For the network, we use a previous method for reconstructing human bodies from real images, and we propose some extensions to enhance learning accuracy. Several examples are shown to demonstrate the usefulness of our system.
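As a minimal sketch of the supervised pairing idea (line-drawing renderings regressed to their source face geometry), the snippet below trains a small MLP on random placeholder tensors; the paper's actual network extends a prior human-body reconstruction model and is not reproduced here, and the vertex count is hypothetical.

```python
import torch
from torch import nn

N_VERTS = 468  # hypothetical vertex count of a face template

# placeholder dataset: line drawings rendered from face models + their 3D vertices
drawings = torch.rand(32, 1, 64, 64)
vertices = torch.rand(32, N_VERTS * 3)

# small MLP standing in for the paper's network
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 256), nn.ReLU(),
    nn.Linear(256, N_VERTS * 3),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):  # regress face vertices directly from the drawing
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(drawings), vertices)
    loss.backward()
    opt.step()
print(float(loss))
```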
{"title":"Human Face Modeling based on Deep Learning through Line-drawing","authors":"Bin Deng, Y. Kawanaka, S. Sato, K. Sakurai, Shang Gao, Z. Tang","doi":"10.2312/pg.20221239","DOIUrl":"https://doi.org/10.2312/pg.20221239","url":null,"abstract":"This paper presents a deep learning-based method for creating 3D human face models. In recent years, several sketch-based shape modeling methods have been proposed. These methods allow the user to easily model various shapes containing animal, building, vehicle, and so on. However, a few methods have been proposed for human face models. If we can create 3D human face models via line-drawing, models of cartoon or fantasy characters can be easily created. To achieve this, we propose a sketch-based face modeling method. When a single line-drawing image is input to our system, a corresponding 3D face model are generated. Our system is based on a deep learning; many human face models and corresponding images rendered as line-drawing are prepared, and then a network is trained using these datasets. For the network, we use a previous method for reconstructing human bodies from real images, and we propose some extensions to enhance learning accuracy. Several examples are shown to demonstrate usefulness of our system.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"13 1","pages":"13-14"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82457820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reconstructing Bounding Volume Hierarchies from Memory Traces of Ray Tracers
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221243
Max von Bülow, Tobias Stensbeck, V. Knauthe, S. Guthe, D. Fellner
The ongoing race to improve computer graphics leads to more complex GPU hardware and ray tracing techniques whose internal functionality is sometimes hidden from the user. Bounding volume hierarchies and their construction are an important performance aspect of such ray tracing implementations. We propose a novel approach that utilizes binary instrumentation to collect memory traces and then uses them to extract the bounding volume hierarchy (BVH) by analyzing access patterns. Our reconstruction allows combining memory traces captured independently from multiple ray tracing views, improving the reconstruction result. It reaches accuracies of 30% to 45% when comparing against the ground-truth BVH used for ray tracing a single view of a simple scene with one object. With multiple views it is even possible to reconstruct the whole BVH; we already achieve 98% with just seven views. Because our approach is largely independent of the data structures used internally, these accurate reconstructions serve as a first step toward estimating the unknown construction techniques of ray tracing implementations.
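As a toy illustration of turning access patterns into a tree, suppose each instrumented ray yields the sequence of node addresses it touches in visit order; parent-child edges can then be guessed from consecutive accesses, and traces from many rays or views are merged. This is a drastic simplification of the paper's analysis, and the addresses below are made up.

```python
from collections import defaultdict

def merge_traces(traces):
    """Guess parent->child edges from consecutive node accesses and merge the
    evidence from many rays. Real traversal orders are stack-based, so this
    naive adjacency rule over-connects; it only sketches the idea."""
    children = defaultdict(set)
    for trace in traces:
        for parent, child in zip(trace, trace[1:]):
            children[parent].add(child)
    return children

# made-up node addresses visited by three instrumented rays
traces = [
    [0x1000, 0x1040, 0x10c0],  # ray 1: root -> left child -> leaf
    [0x1000, 0x1080, 0x1100],  # ray 2: root -> right child -> leaf
    [0x1000, 0x1040, 0x1140],  # ray 3 (another view) reveals one more leaf
]
for node, kids in sorted(merge_traces(traces).items()):
    print(hex(node), [hex(k) for k in sorted(kids)])
```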
{"title":"Reconstructing Bounding Volume Hierarchies from Memory Traces of Ray Tracers","authors":"Max von Bülow, Tobias Stensbeck, V. Knauthe, S. Guthe, D. Fellner","doi":"10.2312/pg.20221243","DOIUrl":"https://doi.org/10.2312/pg.20221243","url":null,"abstract":"The ongoing race to improve computer graphics leads to more complex GPU hardware and ray tracing techniques whose internal functionality is sometimes hidden to the user. Bounding volume hierarchies and their construction are an important performance aspect of such ray tracing implementations. We propose a novel approach that utilizes binary instrumentation to collect memory traces and then uses them to extract the bounding volume hierarchy (BVH) by analyzing access patters. Our reconstruction allows combining memory traces captured from multiple ray tracing views independently, increasing the reconstruction result. It reaches accuracies of 30% to 45% when comparing against the ground-truth BVH used for ray tracing a single view on a simple scene with one object. With multiple views it is even possible to reconstruct the whole BVH, while we already achieve 98% with just seven views. Because our approach is largely independent of the data structures used in-ternally, these accurate reconstructions serve as a first step into estimation of unknown construction techniques of ray tracing implementations.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"60 1","pages":"29-34"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81734255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Intersection Distance Field Collision for GPU
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221242
Bastian Krayer, Rebekka Görge, Stefan Müller
We present a framework for finding collision points between objects represented by signed distance fields. Particles are used to sample the region where intersections can occur. The distance field representation is used to project the particles onto the surface of the intersection of both objects. From this, information such as collision normals and intersection depth can be extracted. This allows for handling various types of objects in a unified way. Due to the particle approach, the algorithm is well suited to the GPU.
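The core operations are straightforward to sketch: the intersection of two objects given as signed distance fields behaves like the pointwise maximum of the two fields, particles sampled in the overlap are projected onto its zero level set by following the numerical gradient, and the normal and penetration depth fall out of the field values. The CPU sketch below uses two spheres; the paper targets the GPU and does not prescribe these exact formulas.

```python
import numpy as np

def sphere_sdf(center, radius):
    return lambda p: np.linalg.norm(p - center) - radius

# two made-up objects and the field of their intersection (inside both)
sdf_a = sphere_sdf(np.array([0.0, 0.0, 0.0]), 1.0)
sdf_b = sphere_sdf(np.array([1.5, 0.0, 0.0]), 1.0)
sdf_int = lambda p: max(sdf_a(p), sdf_b(p))

def gradient(sdf, p, eps=1e-4):
    """Central-difference gradient of a scalar field."""
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[i] = (sdf(p + d) - sdf(p - d)) / (2 * eps)
    return g

def project_to_surface(sdf, p, steps=30):
    """Walk a particle along the field gradient onto the zero level set."""
    for _ in range(steps):
        n = gradient(sdf, p)
        p = p - sdf(p) * n / (np.linalg.norm(n) + 1e-12)
    return p

p = np.array([0.75, 0.05, 0.0])     # particle sampled inside the overlap region
depth = -sdf_int(p)                 # penetration depth at the sample
q = project_to_surface(sdf_int, p)  # contact point on the intersection surface
normal = gradient(sdf_a, q)         # collision normal (taken from one field here)
normal /= np.linalg.norm(normal)
print(q, depth, normal)
```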
{"title":"Intersection Distance Field Collision for GPU","authors":"Bastian Krayer, Rebekka Görge, Stefan Müller","doi":"10.2312/pg.20221242","DOIUrl":"https://doi.org/10.2312/pg.20221242","url":null,"abstract":"We present a framework for finding collision points between objects represented by signed distance fields. Particles are used to sample the region where intersections can occur. The distance field representation is used to project the particles onto the surface of the intersection of both objects. From there information, such as collision normals and intersection depth can be extracted. This allows for handling various types of objects in a unified way. Due to the particle approach, the algorithm is well suited to the GPU.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"91 1","pages":"23-28"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88279658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adaptive and Dynamic Regularization for Rolling Guidance Image Filtering
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221245
M. Fukatsu, S. Yoshizawa, H. Takemura, H. Yokota
Separating shapes and textures of digital images at different scales is useful in computer graphics. The Rolling Guidance (RG) filter, which removes structures smaller than a specified scale while preserving salient edges, has attracted considerable attention. Conventional RG-based filters have some drawbacks, including smoothness/sharpness quality dependence on scale and non-uniform convergence. This paper proposes a novel RG-based image filter that has more stable filtering quality at varying scales. Our filtering approach is an adaptive and dynamic regularization for a recursive regression model in the RG framework to produce more edge saliency and appropriate scale convergence. Our numerical experiments demonstrated filtering results with uniform convergence and high accuracy for varying scales.
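For context, a minimal implementation of the plain rolling guidance iteration is shown below: a joint bilateral filter whose guidance image is the previous iterate, starting from a constant guide so that the first pass acts as a pure Gaussian blur. The paper's contribution, the adaptive and dynamic regularization of the recursive regression model, is not reproduced here.

```python
import numpy as np

def joint_bilateral(img, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Joint bilateral filter of `img`, with range weights taken from `guide`."""
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_img = np.pad(img, radius, mode="edge")
    pad_gde = np.pad(guide, radius, mode="edge")
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            patch = pad_img[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gpatch = pad_gde[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            weights = spatial * np.exp(-(gpatch - guide[y, x])**2 / (2 * sigma_r**2))
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out

def rolling_guidance(img, iterations=4, **kwargs):
    """Small structures vanish in the first (Gaussian-like) pass and large-scale
    edges are recovered over the following joint bilateral iterations."""
    guide = np.zeros_like(img)  # constant guide => pure spatial blur
    for _ in range(iterations):
        guide = joint_bilateral(img, guide, **kwargs)
    return guide

img = np.random.default_rng(0).random((32, 32)).astype(np.float32)
print(rolling_guidance(img, iterations=4).shape)  # (32, 32)
```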
{"title":"Adaptive and Dynamic Regularization for Rolling Guidance Image Filtering","authors":"M. Fukatsu, S. Yoshizawa, H. Takemura, H. Yokota","doi":"10.2312/pg.20221245","DOIUrl":"https://doi.org/10.2312/pg.20221245","url":null,"abstract":"Separating shapes and textures of digital images at different scales is useful in computer graphics. The Rolling Guidance (RG) filter, which removes structures smaller than a specified scale while preserving salient edges, has attracted considerable atten-tion. Conventional RG-based filters have some drawbacks, including smoothness/sharpness quality dependence on scale and non-uniform convergence. This paper proposes a novel RG-based image filter that has more stable filtering quality at varying scales. Our filtering approach is an adaptive and dynamic regularization for a recursive regression model in the RG framework to produce more edge saliency and appropriate scale convergence. Our numerical experiments demonstrated filtering results with uniform convergence and high accuracy for varying scales.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"323 1","pages":"43-48"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86771391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning a Style Space for Interactive Line Drawing Synthesis from Animated 3D Models
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221237
Zeyu Wang, Tuanfeng Y. Wang, Julie Dorsey
Most non-photorealistic rendering (NPR) methods for line drawing synthesis operate on a static shape. They are not tailored to process animated 3D models due to extensive per-frame parameter tuning needed to achieve the intended look and natural transition. This paper introduces a framework for interactive line drawing synthesis from animated 3D models based on a learned style space for drawing representation and interpolation. We refer to style as the relationship between stroke placement in a line drawing and its corresponding geometric properties. Starting from a given sequence of an animated 3D character, a user creates drawings for a set of keyframes. Our system embeds the raster drawings into a latent style space after they are disentangled from the underlying geometry. By traversing the latent space, our system enables a smooth transition between the input keyframes. The user may also edit, add, or remove the keyframes interactively, similar to a typical keyframe-based workflow. We implement our system with deep neural networks trained on synthetic line drawings produced by a combination of NPR methods. Our drawing-specific supervision and optimization-based embedding mechanism allow generalization from NPR line drawings to user-created drawings during run time. Experiments show that our approach generates high-quality line drawing animations while allowing interactive control of the drawing style across frames.
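The in-between behavior can be pictured as interpolation in the learned latent style space: each keyframe drawing is encoded to a style code, codes are interpolated for every animation frame, and a decoder conditioned on the frame's geometry turns each code back into strokes. The sketch below covers only the interpolation step; the encoder and decoder are the paper's learned networks and are not shown, and linear interpolation is an assumption.

```python
import numpy as np

def interpolate_style_codes(key_frames, key_codes, n_frames):
    """Linearly interpolate latent style codes between user keyframes.

    key_frames: sorted frame indices at which the user drew keyframes
    key_codes:  (K, D) latent codes produced by the (not shown) encoder
    returns:    (n_frames, D) one style code per animation frame
    """
    key_codes = np.asarray(key_codes, dtype=np.float64)
    frames = np.arange(n_frames)
    out = np.empty((n_frames, key_codes.shape[1]))
    for d in range(key_codes.shape[1]):
        out[:, d] = np.interp(frames, key_frames, key_codes[:, d])
    return out

codes = interpolate_style_codes(
    key_frames=[0, 30, 60],
    key_codes=np.random.default_rng(0).random((3, 8)),
    n_frames=61,
)
print(codes.shape)  # (61, 8); each row would be decoded into a line drawing
```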
{"title":"Learning a Style Space for Interactive Line Drawing Synthesis from Animated 3D Models","authors":"Zeyu Wang, Tuanfeng Y. Wang, Julie Dorsey","doi":"10.2312/pg.20221237","DOIUrl":"https://doi.org/10.2312/pg.20221237","url":null,"abstract":"Most non-photorealistic rendering (NPR) methods for line drawing synthesis operate on a static shape. They are not tailored to process animated 3D models due to extensive per-frame parameter tuning needed to achieve the intended look and natural transition. This paper introduces a framework for interactive line drawing synthesis from animated 3D models based on a learned style space for drawing representation and interpolation. We refer to style as the relationship between stroke placement in a line drawing and its corresponding geometric properties. Starting from a given sequence of an animated 3D character, a user creates drawings for a set of keyframes. Our system embeds the raster drawings into a latent style space after they are disentangled from the underlying geometry. By traversing the latent space, our system enables a smooth transition between the input keyframes. The user may also edit, add, or remove the keyframes interactively, similar to a typical keyframe-based workflow. We implement our system with deep neural networks trained on synthetic line drawings produced by a combination of NPR methods. Our drawing-specific supervision and optimization-based embedding mechanism allow generalization from NPR line drawings to user-created drawings during run time. Experiments show that our approach generates high-quality line drawing animations while allowing interactive control of the drawing style across frames.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"117 1","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79373858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Interactive Deformable Image Registration with Dual Cursor
Pub Date : 2022-01-01 DOI: 10.2312/pg.20221241
Bin Deng, T. Igarashi, Tsukasa Koike, Taichi Kin
Deformable image registration is the process of deforming a target image to match corresponding features of a reference image. Fully automatic registration remains difficult; thus, manual registration is dominant in practice. In manual registration, an expert user specifies a set of paired landmarks on the two images; subsequently, the system deforms the target image to match each landmark with its counterpart as a batch process. However, the deformation results are difficult for the user to predict, and moving the cursor back and forth between the two images is time-consuming. To improve the efficiency of this manual process, we propose an interactive method wherein the deformation results are continuously displayed as the user clicks and drags each landmark. Additionally, the system displays two cursors, one on the target image and the other on the reference image, to reduce the amount of mouse movement required. The results of a user study reveal that the proposed interactive method achieves higher accuracy and faster task completion compared to traditional batch landmark placement.
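The abstract does not specify the deformation model, so the sketch below uses a Gaussian radial-basis blend of landmark displacements purely as an illustrative stand-in: each time the user drags a landmark, the displacement field is recomputed and the target image can be re-warped immediately, which is what makes the continuously displayed result possible.

```python
import numpy as np

def rbf_displacement(points, src_landmarks, dst_landmarks, sigma=40.0):
    """Displacement at `points` from paired landmarks (Gaussian RBF blend).

    points:        (M, 2) pixel coordinates to evaluate
    src_landmarks: (K, 2) landmark positions on the target image
    dst_landmarks: (K, 2) where the user wants them to end up
    """
    diff = points[:, None, :] - src_landmarks[None, :, :]  # (M, K, 2)
    weights = np.exp(-(diff**2).sum(-1) / (2 * sigma**2))  # (M, K)
    weights /= weights.sum(axis=1, keepdims=True) + 1e-12
    return weights @ (dst_landmarks - src_landmarks)       # (M, 2)

# the user drags the second landmark; the field is recomputed on the fly
src = np.array([[50.0, 50.0], [120.0, 80.0]])
dst = np.array([[50.0, 50.0], [135.0, 90.0]])
grid = np.stack(np.meshgrid(np.arange(0, 200, 20),
                            np.arange(0, 200, 20)), axis=-1).reshape(-1, 2)
disp = rbf_displacement(grid.astype(float), src, dst)
print(disp.shape)  # one 2D displacement per grid point
```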
{"title":"Interactive Deformable Image Registration with Dual Cursor","authors":"Bin Deng, T. Igarashi, Tsukasa Koike, Taichi Kin","doi":"10.2312/pg.20221241","DOIUrl":"https://doi.org/10.2312/pg.20221241","url":null,"abstract":"Deformable image registration is the process of deforming a target image to match corresponding features of a reference image. Fully automatic registration remains difficult; thus, manual registration is dominant in practice. In manual registration, an expert user specifies a set of paired landmarks on the two images; subsequently, the system deforms the target image to match each landmark with its counterpart as a batch process. However, the deformation results are difficult for the user to predict, and moving the cursor back and forth between the two images is time-consuming. To improve the efficiency of this manual process, we propose an interactive method wherein the deformation results are continuously displayed as the user clicks and drags each landmark. Additionally, the system displays two cursors, one on the target image and the other on the reference image, to reduce the amount of mouse movement required. The results of a user study reveal that the proposed interactive method achieves higher accuracy and faster task completion compared to traditional batch landmark placement.","PeriodicalId":88304,"journal":{"name":"Proceedings. Pacific Conference on Computer Graphics and Applications","volume":"2 1","pages":"17-21"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72897178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0