Pixel art character generation as an image-to-image translation problem using GANs
Flávio Coutinho, Luiz Chaimowicz
Pub Date: 2024-02-02 | DOI: 10.1016/j.gmod.2024.101213
Asset creation in game development usually requires multiple iterations until a final version is achieved. This iterative process is even more pronounced for pixel art, in which the artist carefully places each pixel. We hypothesize that generating character sprites in a target pose (e.g., facing right) from a source pose (e.g., facing front) can be framed as an image-to-image translation task. We then present an architecture of deep generative models that takes as input an image of a character in one domain (pose) and transfers it to another. We approach the problem using generative adversarial networks (GANs), building on Pix2Pix's architecture while leveraging specific characteristics of the pixel art style. We evaluated the trained models on four small datasets (fewer than 1k images each) and a larger, more diverse one (12k). The models yielded promising results, and their generalization capacity varies with dataset size and variability. After training models to generate images among four domains (i.e., front, right, back, left), we present an early version of a mixed-initiative sprite editor that allows users to interact with the models and iterate on creating character sprites.
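As a concrete illustration of this framing, here is a minimal sketch of a Pix2Pix-style training step, assuming RGBA sprites scaled to [-1, 1] and a conditional (source, target) discriminator; the toy networks below are placeholders, not the architecture from the paper.

```python
# Minimal sketch of a Pix2Pix-style pose-translation step (toy networks, not
# the paper's architecture). Assumes RGBA sprites scaled to [-1, 1].
import torch
import torch.nn as nn

G = nn.Sequential(  # toy generator; Pix2Pix proper uses a U-Net
    nn.Conv2d(4, 64, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(64, 4, 4, 2, 1), nn.Tanh(),
)
D = nn.Sequential(  # toy PatchGAN-style discriminator on (source, target) pairs
    nn.Conv2d(8, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, 1, 1),  # per-patch real/fake logits
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

src = torch.rand(1, 4, 64, 64) * 2 - 1  # front-pose sprite (dummy data)
tgt = torch.rand(1, 4, 64, 64) * 2 - 1  # right-pose sprite (dummy data)

fake = G(src)
# Discriminator step: push real pairs toward 1, generated pairs toward 0.
d_real = D(torch.cat([src, tgt], 1))
d_fake = D(torch.cat([src, fake.detach()], 1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
# Generator step: fool the discriminator while staying close to the target in L1.
d_fake = D(torch.cat([src, fake], 1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, tgt)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```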
{"title":"Pixel art character generation as an image-to-image translation problem using GANs","authors":"Flávio Coutinho , Luiz Chaimowicz","doi":"10.1016/j.gmod.2024.101213","DOIUrl":"10.1016/j.gmod.2024.101213","url":null,"abstract":"<div><p>Asset creation in game development usually requires multiple iterations until a final version is achieved. This iterative process becomes more significant when the content is pixel art, in which the artist carefully places each pixel. We hypothesize that the problem of generating character sprites in a target pose (e.g., facing right) given a source (e.g., facing front) can be framed as an image-to-image translation task. Then, we present an architecture of deep generative models that takes as input an image of a character in one domain (pose) and transfers it to another. We approach the problem using generative adversarial networks (GANs) and build on Pix2Pix’s architecture while leveraging some specific characteristics of the pixel art style. We evaluated the trained models using four small datasets (less than 1k) and a more extensive and diverse one (12k). The models yielded promising results, and their generalization capacity varies according to the dataset size and variability. After training models to generate images among four domains (i.e., front, right, back, left), we present an early version of a mixed-initiative sprite editor that allows users to interact with them and iterate in creating character sprites.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"132 ","pages":"Article 101213"},"PeriodicalIF":1.7,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070324000018/pdfft?md5=d7948e383c160b41fc886121e68e438f&pid=1-s2.0-S1524070324000018-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139661295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating and comparing crowd simulations: Perspectives from a crowd authoring tool
Gabriel Fonseca Silva, Paulo Ricardo Knob, Rubens Halbig Montanha, Soraia Raupp Musse
Pub Date: 2024-01-03 | DOI: 10.1016/j.gmod.2023.101212
Crowd simulation is a research area widely used in diverse fields, including gaming and security, in which virtual agent movements are assessed through metrics such as time to reach goals, speed, trajectories, and densities. This is relevant for security applications, for instance, since different crowd configurations can determine how long people take to evacuate an environment. In this work, we extend WebCrowds, an authoring tool for crowd simulation, to allow users to build scenarios and evaluate them through a set of metrics. The aim is to provide a quantitative metric that can, based on simulation data, select the best crowd configuration in a given environment. We conduct experiments to validate our proposed metric in multiple crowd simulation scenarios and compare it with another metric found in the literature. The results show that experts in the domain of crowd scenarios agree with our proposed quantitative metric.
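The paper's metric itself is not reproduced here; the sketch below only illustrates the kind of aggregate score such a tool could compute from per-agent simulation data. The weights and field names are made-up assumptions.

```python
# Hypothetical sketch of an aggregate crowd-configuration score (illustrative
# weights and fields; not the metric proposed in the paper). Lower is better:
# slow evacuations, long detours and congestion are penalised.
from dataclasses import dataclass

@dataclass
class AgentStats:
    time_to_goal: float   # seconds until the agent reached its goal
    mean_speed: float     # metres per second
    path_length: float    # metres travelled
    mean_density: float   # agents per square metre around the agent

def configuration_score(agents, w_time=0.4, w_speed=0.2, w_path=0.2, w_dens=0.2):
    n = len(agents)
    avg = lambda f: sum(f(a) for a in agents) / n
    # Speed enters inversely: faster average movement improves (lowers) the score.
    return (w_time * avg(lambda a: a.time_to_goal)
            + w_speed / max(avg(lambda a: a.mean_speed), 1e-6)
            + w_path * avg(lambda a: a.path_length)
            + w_dens * avg(lambda a: a.mean_density))

runs = {  # two candidate environment configurations, two sampled agents each
    "two_exits": [AgentStats(35.0, 1.2, 40.0, 1.8), AgentStats(42.0, 1.0, 44.0, 2.1)],
    "one_exit":  [AgentStats(61.0, 0.7, 43.0, 3.4), AgentStats(58.0, 0.8, 41.0, 3.1)],
}
print(min(runs, key=lambda k: configuration_score(runs[k])))  # -> two_exits
```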
{"title":"Evaluating and comparing crowd simulations: Perspectives from a crowd authoring tool","authors":"Gabriel Fonseca Silva, Paulo Ricardo Knob, Rubens Halbig Montanha, Soraia Raupp Musse","doi":"10.1016/j.gmod.2023.101212","DOIUrl":"10.1016/j.gmod.2023.101212","url":null,"abstract":"<div><p>Crowd simulation is a research area widely used in diverse fields, including gaming and security, assessing virtual agent movements through metrics like time to reach their goals, speed, trajectories, and densities. This is relevant for security applications, for instance, as different crowd configurations can determine the time people spend in environments trying to evacuate them. In this work, we extend WebCrowds, an authoring tool for crowd simulation, to allow users to build scenarios and evaluate them through a set of metrics. The aim is to provide a quantitative metric that can, based on simulation data, select the best crowd configuration in a certain environment. We conduct experiments to validate our proposed metric in multiple crowd simulation scenarios and perform a comparison with another metric found in the literature. The results show that experts in the domain of crowd scenarios agree with our proposed quantitative metric.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"131 ","pages":"Article 101212"},"PeriodicalIF":1.7,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070323000425/pdfft?md5=99cc8b127e117c8937d599aa1f5ebafe&pid=1-s2.0-S1524070323000425-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139084586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial special issue on the 9th smart tools and applications in graphics conference (STAG 2022)","authors":"Daniela Cabiddu , Gianmarco Cherchi , Teseo Schneider","doi":"10.1016/j.gmod.2023.101203","DOIUrl":"10.1016/j.gmod.2023.101203","url":null,"abstract":"","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101203"},"PeriodicalIF":1.7,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070323000334/pdfft?md5=5e8e5ee6713dd442b9a08e76744aae09&pid=1-s2.0-S1524070323000334-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135638180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-performance Ellipsoidal Clipmaps
Aleksandar Dimitrijević, Dejan Rančić
Pub Date: 2023-12-01 | DOI: 10.1016/j.gmod.2023.101209
This paper presents performance improvements for Ellipsoid Clipmaps, an out-of-core, planet-sized, geodetically accurate terrain rendering algorithm. The improvements were achieved by eliminating unnecessarily dense levels, more accurate block culling in the geographic coordinate system, and more efficient rendering methods. The elimination of unnecessarily dense levels results from analyzing and determining the optimal relative height of the viewer with respect to the most detailed level, yielding the most consistent triangle size across all visible levels. The proposed method for estimating block visibility based on view orientation allows rapid block-level view frustum culling performed in data space, before visualization and spatial transformation of blocks. The use of a modern geometry pipeline through task and mesh shaders forced the handling of extremely fine block granularity, but also shifted a significant part of the block culling process from the CPU to the GPU. The described approach achieves high throughput and enables geodetically accurate terrain rendering based on the WGS 84 reference ellipsoid at very high resolution and in real time, with tens of millions of triangles with an average area of about 0.5 pix² on a 1080p screen on mid-range graphics cards.
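As a hedged illustration of what culling in data space can mean here (an assumption about the general idea, not the paper's actual visibility test), a block on the ellipsoid can be rejected before any per-vertex work by comparing its outward geodetic normal with the up direction above the viewer:

```python
# Sketch of a data-space rejection test for terrain blocks on the WGS 84
# ellipsoid (illustrative only; not the paper's exact criterion). Blocks whose
# surface normal points away from the viewer beyond a slack are culled before
# any spatial transformation.
import math

def geodetic_normal(lon_deg, lat_deg):
    """Outward unit normal of the ellipsoid at a geodetic coordinate."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def block_maybe_visible(block_center_lonlat, viewer_up, slack=0.35):
    """viewer_up: unit normal above the viewer's ground point; slack widens
    the kept region to account for block extent and viewer altitude."""
    n = geodetic_normal(*block_center_lonlat)
    dot = sum(a * b for a, b in zip(n, viewer_up))
    return dot > -slack  # keep blocks facing (or nearly facing) the viewer

viewer_up = geodetic_normal(20.0, 44.0)                 # viewer above lon 20 E, lat 44 N
print(block_maybe_visible((21.0, 45.0), viewer_up))     # nearby block -> True
print(block_maybe_visible((-160.0, -44.0), viewer_up))  # antipodal block -> False
```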
{"title":"High-performance Ellipsoidal Clipmaps","authors":"Aleksandar Dimitrijević, Dejan Rančić","doi":"10.1016/j.gmod.2023.101209","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101209","url":null,"abstract":"<div><p>This paper presents performance improvements for Ellipsoid Clipmaps, an out-of-core planet-sized geodetically accurate terrain rendering algorithm. The performance improvements were achieved by eliminating unnecessarily dense levels, more accurate block culling in the geographic coordinate system, and more efficient rendering methods. The elimination of unnecessarily dense levels is the result of analyzing and determining the optimal relative height of the viewer with respect to the most detailed level, resulting in the most consistent size of triangles across all visible levels. The proposed method for estimating the visibility of blocks based on view orientation allows rapid block-level view frustum culling performed in data space before visualization and spatial transformation of blocks. The use of a modern geometry pipeline through task and mesh shaders forced the handling of extremely fine granularity of blocks, but also shifted a significant part of the block culling process from CPU to the GPU. The approach described achieves high throughput and enables geodetically accurate rendering of the terrain based on the WGS 84 reference ellipsoid at very high resolution and in real time, with tens of millions of triangles with an average area of about 0.5 pix<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span> on a 1080p screen on mid-range graphics cards.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101209"},"PeriodicalIF":1.7,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070323000395/pdfft?md5=26122c390b83d408f64d205c80bb4675&pid=1-s2.0-S1524070323000395-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138466486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling multi-style portrait relief from a single photograph
Yu-Wei Zhang, Hongguang Yang, Ping Luo, Zhi Li, Hui Liu, Zhongping Ji, Caiming Zhang
Pub Date: 2023-11-28 | DOI: 10.1016/j.gmod.2023.101210
This paper extends the method of Zhang et al. (2023) to produce not only portrait bas-reliefs from single photographs, but also high-depth reliefs with reasonable depth ordering. We cast this task as a problem of style-aware photo-to-depth translation, where the input is a photograph conditioned by a style vector and the output is a portrait relief with the desired depth style. To construct ground-truth data for network training, we first propose an optimization-based method to synthesize high-depth reliefs from 3D portraits. Then, we train a normal-to-depth network to learn the mapping from normal maps to relief depths. After that, we use the trained network to generate high-depth relief samples from the normal maps provided by Zhang et al. (2023). As each normal map has a pixel-wise aligned photograph, we are able to establish correspondences between photographs and high-depth reliefs. Taking the bas-reliefs of Zhang et al. (2023), the new high-depth reliefs, and their mixtures as target ground truths, we finally train an encoder-decoder network to achieve style-aware relief modeling. Specifically, the network is based on a U-shaped architecture consisting of Swin Transformer blocks that process hierarchical deep features. Extensive experiments have demonstrated the effectiveness of the proposed method. Comparisons with previous works have verified its flexibility and state-of-the-art performance.
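The normal-to-depth mapping that the network learns has a classical counterpart worth recalling (a standard shape-from-normals relation, not necessarily the paper's formulation): under an orthographic view, a unit normal fixes the depth gradient, and depth follows by integrating that gradient field (e.g., with a Poisson solve).

```latex
\mathbf{n} \;=\; \frac{\left(-z_x,\, -z_y,\, 1\right)}{\sqrt{z_x^{2} + z_y^{2} + 1}}
\quad\Longrightarrow\quad
z_x \;=\; -\frac{n_x}{n_z}, \qquad z_y \;=\; -\frac{n_y}{n_z}
```

Here z_x and z_y denote the partial derivatives of the depth map; the learned network effectively approximates this integration while also applying the style-dependent depth compression that a plain Poisson solve cannot.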
{"title":"Modeling multi-style portrait relief from a single photograph","authors":"Yu-Wei Zhang , Hongguang Yang , Ping Luo , Zhi Li , Hui Liu , Zhongping Ji , Caiming Zhang","doi":"10.1016/j.gmod.2023.101210","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101210","url":null,"abstract":"<div><p>This paper aims at extending the method of Zhang et al. (2023) to produce not only portrait bas-reliefs from single photographs, but also high-depth reliefs with reasonable depth ordering. We cast this task as a problem of style-aware photo-to-depth translation, where the input is a photograph conditioned by a style vector and the output is a portrait relief with desired depth style. To construct ground-truth data for network training, we first propose an optimization-based method to synthesize high-depth reliefs from 3D portraits. Then, we train a normal-to-depth network to learn the mapping from normal maps to relief depths. After that, we use the trained network to generate high-depth relief samples using the provided normal maps from Zhang et al. (2023). As each normal map has pixel-wise photograph, we are able to establish correspondences between photographs and high-depth reliefs. By taking the bas-reliefs of Zhang et al. (2023), the new high-depth reliefs and their mixtures as target ground-truths, we finally train a encoder-to-decoder network to achieve style-aware relief modeling. Specially, the network is based on a U-shaped architecture, consisting of Swin Transformer blocks to process hierarchical deep features. Extensive experiments have demonstrated the effectiveness of the proposed method. Comparisons with previous works have verified its flexibility and state-of-the-art performance.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101210"},"PeriodicalIF":1.7,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070323000401/pdfft?md5=de53c7cacd318b65effd57ea40c70f18&pid=1-s2.0-S1524070323000401-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138454034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A decomposition scheme for continuous Level of Detail, streaming and lossy compression of unordered point clouds
Jan Martens, Jörg Blankenbach
Pub Date: 2023-11-08 | DOI: 10.1016/j.gmod.2023.101208
Modern laser scanners, depth sensor devices and Dense Image Matching techniques allow extensive point cloud datasets to be captured. While capturing has become more user-friendly, registered point clouds result in large datasets that pose challenges for processing, storage and visualization. This paper presents a decomposition scheme for unordered point clouds using oriented KD trees and the wavelet transform. Taking inspiration from image pyramids, the decomposition scheme comes with a Level of Detail representation in which higher levels are progressively reconstructed from lower ones, making it suitable for streaming and continuous Level of Detail. Furthermore, the decomposed representation allows common compression techniques to achieve higher compression ratios by modifying the underlying frequency data at the cost of geometric accuracy, thereby enabling flexible lossy compression. After introducing this novel decomposition scheme, results are discussed to show how it deals with data captured from different sources.
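A minimal sketch of the idea, assuming a single-axis sort stands in for the oriented KD tree ordering (the paper's actual scheme is not reproduced): one Haar analysis/synthesis level over point coordinates, with quantised detail coefficients standing in for lossy compression.

```python
# One Haar wavelet level over ordered point coordinates. Lower levels keep
# averages (coarse LoD); detail coefficients are streamed later (or quantised
# for lossy compression) to refine the cloud.
import numpy as np

def haar_decompose(points):
    """points: (2n, 3) array ordered so neighbours sit next to each other."""
    even, odd = points[0::2], points[1::2]
    approx = (even + odd) / np.sqrt(2.0)   # coarse level: half the points
    detail = (even - odd) / np.sqrt(2.0)   # high-frequency residual
    return approx, detail

def haar_reconstruct(approx, detail):
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    out = np.empty((len(approx) * 2, 3))
    out[0::2], out[1::2] = even, odd
    return out

rng = np.random.default_rng(0)
pts = rng.random((1024, 3))
pts = pts[np.argsort(pts[:, 0])]            # crude stand-in for KD ordering
coarse, det = haar_decompose(pts)
preview = coarse / np.sqrt(2.0)             # = pairwise midpoints: coarse LoD preview
det_lossy = np.round(det * 256) / 256       # quantise details: lossy compression
full = haar_reconstruct(coarse, det_lossy)  # stream details -> refined cloud
print(np.abs(full - pts).max())             # small quantisation error
```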
{"title":"A decomposition scheme for continuous Level of Detail, streaming and lossy compression of unordered point clouds","authors":"Jan Martens, Jörg Blankenbach","doi":"10.1016/j.gmod.2023.101208","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101208","url":null,"abstract":"<div><p>Modern laser scanners, depth sensor devices and Dense Image Matching techniques allow for capturing of extensive point cloud datasets. While capturing has become more user-friendly, the size of registered point clouds results in large datasets which pose challenges for processing, storage and visualization. This paper presents a decomposition scheme using oriented KD trees and the wavelet transform for unordered point clouds. Taking inspiration from image pyramids, the decomposition scheme comes with a Level of Detail representation where higher-levels are progressively reconstructed from lower ones, thus making it suitable for streaming and continuous Level of Detail. Furthermore, the decomposed representation allows common compression techniques to achieve higher compression ratios by modifying the underlying frequency data at the cost of geometric accuracy and therefore allows for flexible lossy compression. After introducing this novel decomposition scheme, results are discussed to show how it deals with data captured from different sources.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101208"},"PeriodicalIF":1.7,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070323000383/pdfft?md5=acb2ab838184d4b7e97e6052e64a6ea6&pid=1-s2.0-S1524070323000383-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92047097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vertex position estimation with spatial–temporal transformer for 3D human reconstruction
Xiangjun Zhang, Yinglin Zheng, Wenjin Deng, Qifeng Dai, Yuxin Lin, Wangzheng Shi, Ming Zeng
Pub Date: 2023-10-26 | DOI: 10.1016/j.gmod.2023.101207
Reconstructing 3D human pose and body shape from monocular images or videos is a fundamental task for understanding human dynamics. Frame-based methods can be broadly categorized into two groups: those regressing parametric model parameters (e.g., SMPL) and those exploring alternative representations (e.g., volumetric shapes, 3D coordinates). Non-parametric representations have demonstrated superior performance due to their enhanced flexibility. However, when applied to video data, these non-parametric frame-based methods tend to generate inconsistent and unsmooth results. To this end, we present a novel approach that directly regresses the 3D coordinates of mesh vertices and body joints with a spatial–temporal Transformer. We introduce a SpatioTemporal Learning Block (STLB) with a Spatial Learning Module (SLM) and a Temporal Learning Module (TLM), which leverages spatial and temporal information to model interactions at a finer granularity, specifically at the body token level. Our method outperforms previous state-of-the-art approaches on the Human3.6M and 3DPW benchmark datasets.
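A hedged sketch of the spatial/temporal factorisation described above; the STLB/SLM/TLM names follow the abstract, but the layer sizes and wiring are illustrative assumptions rather than the authors' implementation.

```python
# Spatial attention runs across body tokens within each frame; temporal
# attention runs across frames for each body token.
import torch
import torch.nn as nn

class STLB(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.slm = nn.MultiheadAttention(dim, heads, batch_first=True)  # spatial
        self.tlm = nn.MultiheadAttention(dim, heads, batch_first=True)  # temporal
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):                      # x: (batch, frames, tokens, dim)
        b, t, j, d = x.shape
        s = x.reshape(b * t, j, d)             # attend across body tokens per frame
        s = self.norm1(s + self.slm(s, s, s)[0]).reshape(b, t, j, d)
        m = s.permute(0, 2, 1, 3).reshape(b * j, t, d)  # attend across frames per token
        m = self.norm2(m + self.tlm(m, m, m)[0])
        return m.reshape(b, j, t, d).permute(0, 2, 1, 3)

x = torch.rand(2, 16, 24, 64)  # 16 frames, 24 body tokens (dummy data)
print(STLB()(x).shape)         # torch.Size([2, 16, 24, 64])
```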
{"title":"Vertex position estimation with spatial–temporal transformer for 3D human reconstruction","authors":"Xiangjun Zhang, Yinglin Zheng, Wenjin Deng, Qifeng Dai, Yuxin Lin, Wangzheng Shi, Ming Zeng","doi":"10.1016/j.gmod.2023.101207","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101207","url":null,"abstract":"<div><p>Reconstructing 3D human pose and body shape from monocular images or videos is a fundamental task for comprehending human dynamics. Frame-based methods can be broadly categorized into two fashions: those regressing parametric model parameters (e.g., SMPL) and those exploring alternative representations (e.g., volumetric shapes, 3D coordinates). Non-parametric representations have demonstrated superior performance due to their enhanced flexibility. However, when applied to video data, these non-parametric frame-based methods tend to generate inconsistent and unsmooth results. To this end, we present a novel approach that directly regresses the 3D coordinates of the mesh vertices and body joints with a spatial–temporal Transformer. In our method, we introduce a SpatioTemporal Learning Block (STLB) with Spatial Learning Module (SLM) and Temporal Learning Module (TLM), which leverages spatial and temporal information to model interactions at a finer granularity, specifically at the body token level. Our method outperforms previous state-of-the-art approaches on Human3.6M and 3DPW benchmark datasets.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101207"},"PeriodicalIF":1.7,"publicationDate":"2023-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1524070323000371/pdfft?md5=a920877b3ee3210b23f7a6444d151f50&pid=1-s2.0-S1524070323000371-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92047096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A systematic approach for enhancement of homogeneous background images using structural information
D. Vijayalakshmi, Malaya Kumar Nath
Pub Date: 2023-10-25 | DOI: 10.1016/j.gmod.2023.101206
Image enhancement is an indispensable pre-processing step for several image processing applications. Histogram equalization, in particular, is a widespread technique for improving image quality by expanding pixel values to fill the entire dynamic grayscale range. However, it can produce visual artifacts, structural information loss near edges (due to many-to-one mapping), and a shift of the average luminance to a higher value. This paper proposes an enhancement algorithm based on structural information for homogeneous background images. The intensities are divided into two segments using the median value to preserve the average luminance. Unlike traditional techniques, the algorithm incorporates spatial locations into the equalization process instead of counting only the occurrences of intensity values. The occurrences of each intensity, together with their spatial locations, are combined using Rényi entropy to enumerate a discrete function. An adaptive clipping limit is applied to the discrete function to control the enhancement rate. Histogram equalization is then performed on each segment separately, and the equalized segments are combined to produce an enhanced image. The algorithm's effectiveness is validated by evaluating the proposed method on the CEED, CSIQ, LOL, and TID2013 databases. Experimental results reveal that the proposed method improves contrast while preserving structural information, detail, and average luminance. This is quantified by the method's high contrast improvement index, structural similarity index, and discrete entropy, and its low average mean brightness error, compared with methods available in the literature, including deep learning architectures.
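A minimal sketch of the pipeline's skeleton (median split, per-segment equalization with a clipping limit); the paper's Rényi-entropy weighting of spatial locations is replaced here by a plain clipped histogram for brevity.

```python
# Median-split, clipped histogram equalization: each half of the intensity
# range is equalized within its own bounds, which keeps the mean luminance
# roughly where it was.
import numpy as np

def clipped_equalize(channel_vals, lo, hi, clip_ratio=0.02):
    hist, _ = np.histogram(channel_vals, bins=hi - lo + 1, range=(lo, hi + 1))
    hist = np.minimum(hist, clip_ratio * channel_vals.size)  # cap enhancement rate
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    lut = (lo + cdf * (hi - lo)).astype(np.uint8)  # per-intensity remap table
    return lut[channel_vals - lo]

def median_split_equalize(img):
    """img: uint8 grayscale image; equalize below/above the median separately."""
    med = int(np.median(img))
    out = img.copy()
    low, high = img <= med, img > med
    out[low] = clipped_equalize(img[low], int(img.min()), med)
    out[high] = clipped_equalize(img[high], med + 1, int(img.max()))
    return out

img = (np.random.default_rng(1).normal(110, 20, (64, 64))
       .clip(0, 255).astype(np.uint8))
print(img.mean(), median_split_equalize(img).mean())  # means stay close
```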
{"title":"A systematic approach for enhancement of homogeneous background images using structural information","authors":"D. Vijayalakshmi , Malaya Kumar Nath","doi":"10.1016/j.gmod.2023.101206","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101206","url":null,"abstract":"<div><p>Image enhancement is an indispensable pre-processing step for several image processing applications. Mainly, histogram equalization is one of the widespread techniques used by various researchers to improve the image quality by expanding the pixel values to fill the entire dynamic grayscale. It results in the visual artifact, structural information loss near edges due to the information loss (due to many-to-one mapping), and alteration in average luminance to a higher value. This paper proposes an enhancement algorithm based on structural information for homogeneous background images. The intensities are divided into two segments using the median value to preserve the average luminance. Unlike traditional techniques, this algorithm incorporates the spatial locations in the equalization process instead of the number of intensity values occurrences. The occurrences of each intensity concerning their spatial locations are combined using Rènyi entropy to enumerate a discrete function. An adaptive clipping limit is applied to the discrete function to control the enhancement rate. Then histogram equalization is performed on each segment separately, and the equalized segments are integrated to produce an enhanced image. The algorithm’s effectiveness is validated by evaluating the proposed method on CEED, CSIQ, LOL, and TID2013 databases. Experimental results reveal that the proposed method improves the contrast while preserving structural information, detail information, and average luminance. They are quantified by the high value of contrast improvement index, structural similarity index, and discrete entropy, and low value of average mean brightness error values of the proposed method when compared with the methods available in the literature, including deep learning architectures.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101206"},"PeriodicalIF":1.7,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S152407032300036X/pdfft?md5=66c749d2624c0d77acd46a4f2037626a&pid=1-s2.0-S152407032300036X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92047095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jrender: An efficient differentiable rendering library based on Jittor
Hanggao Xin, Chenzhong Xiang, Wenyang Zhou, Dun Liang
Pub Date: 2023-10-18 | DOI: 10.1016/j.gmod.2023.101202
Differentiable rendering has proven to be a powerful tool for bridging 2D images and 3D models. With its aid, tasks in computer vision and computer graphics can be solved more elegantly and accurately. To address challenges in implementing differentiable rendering methods, we present an efficient and modular differentiable rendering library named Jrender, based on Jittor. Jrender supports surface rendering for 3D meshes and volume rendering for 3D volumes. Compared with previous differentiable renderers, Jrender exhibits significant improvements in both performance and rendering quality. Thanks to its modular design, rendering effects such as PBR material shading, ambient occlusion, soft shadows, global illumination, and subsurface scattering are easily supported in Jrender, while unavailable in other differentiable rendering libraries. To validate the library, we integrate Jrender into applications such as 3D object reconstruction and NeRF, showing that our implementations achieve the same quality with higher performance.
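Jrender's API is not reproduced here; instead, a toy example of the core idea that makes any differentiable renderer work: replacing hard pixel coverage with a soft, differentiable falloff lets an image-space loss drive geometry parameters by gradient descent.

```python
# Toy differentiable "renderer": a disc drawn with soft per-pixel coverage,
# fitted to a target image purely through gradients (not Jrender's API).
import torch

def soft_disc(center, radius, res=32, sharpness=10.0):
    """Differentiable render of a disc as per-pixel soft coverage."""
    ys, xs = torch.meshgrid(torch.linspace(0, 1, res),
                            torch.linspace(0, 1, res), indexing="ij")
    dist = torch.sqrt((xs - center[0]) ** 2 + (ys - center[1]) ** 2 + 1e-8)
    return torch.sigmoid(sharpness * (radius - dist))  # ~1 inside, ~0 outside

target = soft_disc(torch.tensor([0.7, 0.6]), torch.tensor(0.2)).detach()
center = torch.tensor([0.4, 0.4], requires_grad=True)
radius = torch.tensor(0.15, requires_grad=True)
opt = torch.optim.Adam([center, radius], lr=0.05)
for _ in range(300):  # gradients flow from the image loss to the geometry
    loss = ((soft_disc(center, radius) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(center.detach(), radius.detach())  # moves toward (0.7, 0.6) and 0.2
```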
{"title":"Jrender: An efficient differentiable rendering library based on Jittor","authors":"Hanggao Xin, Chenzhong Xiang, Wenyang Zhou, Dun Liang","doi":"10.1016/j.gmod.2023.101202","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101202","url":null,"abstract":"<div><p>Differentiable rendering has been proven as a powerful tool to bridge 2D images and 3D models. With the aid of differentiable rendering, tasks in computer vision and computer graphics could be solved more elegantly and accurately. To address challenges in the implementations of differentiable rendering methods, we present an efficient and modular differentiable rendering library named Jrender based on Jittor. Jrender supports surface rendering for 3D meshes and volume rendering for 3D volumes. Compared with previous differentiable renderers, Jrender exhibits a significant improvement in both performance and rendering quality. Due to the modular design, various rendering effects such as PBR materials shading, ambient occlusions, soft shadows, global illumination, and subsurface scattering could be easily supported in Jrender, which are not available in other differentiable rendering libraries. To validate our library, we integrate Jrender into applications such as 3D object reconstruction and NeRF, which show that our implementations could achieve the same quality with higher performance.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101202"},"PeriodicalIF":1.7,"publicationDate":"2023-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49889744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Packing problems on generalised regular grid: Levels of abstraction using integer linear programming
Hao Hua, Benjamin Dillenburger
Pub Date: 2023-10-07 | DOI: 10.1016/j.gmod.2023.101205
Packing a designated set of shapes on a regular grid is an important class of operations research problems that has been intensively studied for more than six decades. Representing a d-dimensional discrete grid as Z^d, we formalise the generalised regular grid (GRG) as a surjective function from Z^d to a geometric tessellation in a physical space, for example, the cube coordinates of a hexagonal grid or a quasilattice. This study employs 0-1 integer linear programming (ILP) to formulate the polyomino tiling problem with adjacency constraints. Rotation & reflection invariance in adjacency are considered. We separate the formal ILP from the topology & geometry of various grids, such as Ammann-Beenker tiling, Penrose tiling and periodic hypercube. Based on cutting-edge solvers, we reveal an intuitive correspondence between the integer program (a pattern of algebraic rules) and the computer codes. Models of packing problems in the GRG have wide applications in production systems, facility layout planning, and architectural design. Two applications in planning high-rise residential apartments are illustrated.
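To make the 0-1 ILP view tangible, here is a minimal sketch using the PuLP library on a plain 4x4 Z^2 grid with dominoes and an exact-cover constraint only; the paper's adjacency constraints, rotation/reflection handling and GRG mapping are omitted.

```python
# 0-1 ILP for domino tiling: one binary variable per placement; each grid cell
# must be covered by exactly one chosen domino. Solved with PuLP's bundled CBC.
import pulp

W = H = 4
dominoes = []  # every axis-aligned 1x2 / 2x1 placement on the 4x4 grid
for x in range(W):
    for y in range(H):
        if x + 1 < W:
            dominoes.append(((x, y), (x + 1, y)))
        if y + 1 < H:
            dominoes.append(((x, y), (x, y + 1)))

prob = pulp.LpProblem("domino_tiling", pulp.LpMinimize)
place = [pulp.LpVariable(f"p{i}", cat="Binary") for i in range(len(dominoes))]
prob += pulp.lpSum(place)  # inert objective: any exact cover uses 16/2 = 8 pieces
for x in range(W):
    for y in range(H):     # exact-cover constraint per grid cell
        prob += pulp.lpSum(place[i] for i, d in enumerate(dominoes)
                           if (x, y) in d) == 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [dominoes[i] for i, v in enumerate(place) if v.value() == 1]
print(len(chosen), "dominoes tile the grid")  # 8
```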
{"title":"Packing problems on generalised regular grid: Levels of abstraction using integer linear programming","authors":"Hao Hua , Benjamin Dillenburger","doi":"10.1016/j.gmod.2023.101205","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101205","url":null,"abstract":"<div><p>Packing a designated set of shapes on a regular grid is an important class of operations research problems that has been intensively studied for more than six decades. Representing a <span><math><mi>d</mi></math></span>-dimensional discrete grid as <span><math><msup><mrow><mi>Z</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span>, we formalise the generalised regular grid (GRG) as a surjective function from <span><math><msup><mrow><mi>Z</mi></mrow><mrow><mi>d</mi></mrow></msup></math></span> to a geometric tessellation in a physical space, for example, the cube coordinates of a hexagonal grid or a quasilattice. This study employs 0-1 integer linear programming (ILP) to formulate the polyomino tiling problem with adjacency constraints. Rotation & reflection invariance in adjacency are considered. We separate the formal ILP from the topology & geometry of various grids, such as Ammann-Beenker tiling, Penrose tiling and periodic hypercube. Based on cutting-edge solvers, we reveal an intuitive correspondence between the integer program (a pattern of algebraic rules) and the computer codes. Models of packing problems in the GRG have wide applications in production system, facility layout planning, and architectural design. Two applications in planning high-rise residential apartments are illustrated.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101205"},"PeriodicalIF":1.7,"publicationDate":"2023-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49889742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}