SPCBPT: subspace-based probabilistic connections for bidirectional path tracing
Fujia Su, Sheng Li, Guoping Wang
ACM Trans. Graph., 77:1-77:14, 2022. https://doi.org/10.1145/3528223.3530183
Bidirectional path tracing (BDPT) can be accelerated by selecting appropriate light sub-paths for connection. However, existing algorithms must rebuild the selection distribution frequently and thus incur expensive overhead. We present SPCBPT, a novel approach for probabilistic connections that constructs the light selection distribution in sub-path space. Our approach bins the sub-paths into multiple subspaces so that sub-paths within the same subspace have low discrepancy; light sub-paths are then selected by a subspace-based two-stage sampling method, i.e., sampling a light subspace and then resampling the light sub-paths within this subspace. The subspace-based distribution requires no reconstruction and provides efficient light selection at very low cost. We also propose a method that accounts for the Multiple Importance Sampling (MIS) term in light selection, yielding an MIS-aware distribution that minimizes an upper bound on the variance of the combined estimator; prior methods typically omit this MIS weight. We evaluate our algorithm on various benchmarks, and the results show that our approach offers superior performance and significantly reduces noise compared with state-of-the-art methods.

{"title":"TopoCut: fast and robust planar cutting of arbitrary domains","authors":"Xianzhong Fang, H. Bao, Jin Huang","doi":"10.1145/3528223.3530149","DOIUrl":"https://doi.org/10.1145/3528223.3530149","url":null,"abstract":",","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"164 1","pages":"40:1-40:15"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76380015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
StrokeStrip: joint parameterization and fitting of stroke clusters
Dave Pagurek van Mossel, Chenxi Liu, Nicholas Vining, Mikhail Bessmeltsev, A. Sheffer
ACM Trans. Graph., 50:1-50:18, 2021. https://doi.org/10.1145/3450626.3459777

Weavecraft: an interactive design and simulation tool for 3D weaving
Rundong Wu, Joy Xiaoji Zhang, Jonathan Leaf, Xinru Hua, Ante Qu, Claire Harvey, Emily Holtzman, Joy Ko, B. Hagan, Doug L. James, François Guimbretière, Steve Marschner
ACM Trans. Graph., 210:1-210:16, 2020. https://doi.org/10.1145/3414685.3417865
3D weaving is an emerging technology for manufacturing multilayer woven textiles. In this work, we present Weavecraft: an interactive, simulation-based design tool for 3D weaving. Unlike existing textile software that uses 2D representations for design patterns […]

Monolith: a monolithic pressure-viscosity-contact solver for strong two-way rigid-rigid rigid-fluid coupling
Tetsuya Takahashi, Christopher Batty
ACM Trans. Graph., 182:1-182:16, 2020. https://doi.org/10.1145/3414685.3417798
Fig. 1. Our Monolith solver enables efficient and robust two-way simultaneous rigid-rigid and rigid-fluid coupling. (Left) Two hollow glass spheres containing inviscid liquid roll around within a basin as the liquid slides and splashes. (Middle) A boat carrying multiple loads is perturbed by ocean waves. (Right) When the glass spheres instead contain viscous liquid, the no-slip boundary condition, viscosity, and friction together bring the spheres more quickly to rest.

Stormscapes: simulating cloud dynamics in the now
Torsten Hädrich, Milosz Makowski, Wojciech Palubicki, D. Banuti, S. Pirk, D. Michels
ACM Trans. Graph., 175:1-175:16, 2020. https://doi.org/10.1145/3414685.3417801
The complex interplay of numerous physical and meteorological phenomena makes simulating clouds a challenging and open research problem. We explore a physically accurate model for simulating clouds and the dynamics of their transitions. We propose first-principle formulations for computing buoyancy and air pressure that allow us to simulate variations of atmospheric density and varying temperature gradients. Our simulation allows us to model various cloud types, such as cumulus, stratus, and stratocumulus, and their realistic formations caused by changes in the atmosphere. Moreover, we are able to simulate large-scale cloud supercells (clusters of cumulonimbus formations) that are commonly present during thunderstorms. To enable the efficient exploration of these stormscapes, we propose a lightweight set of high-level parameters that allow us to intuitively explore cloud formations and dynamics. Our method allows us to simulate cloud formations of up to about 20 km × 20 km in extent at interactive rates. We explore the capabilities of physically accurate yet interactive cloud simulation by showing numerous examples and by coupling our model with atmospheric measurements from real-time weather services to simulate cloud formations in the now. Finally, we quantitatively assess our model with cloud fraction profiles, a common measure for comparing cloud types.

MATch: differentiable material graphs for procedural material capture
Liang Shi, Beichen Li, Miloš Hašan, Kalyan Sunkavalli, T. Boubekeur, R. Mech, W. Matusik
ACM Trans. Graph., 196:1-196:15, 2020. https://doi.org/10.1145/3414685.3417781
[…] that maps node graph parameters to rendered images. This facilitates the use of gradient-based optimization to estimate the parameters such that the resulting material, when rendered, matches the target image appearance, as quantified by a style transfer loss. In addition, we propose a deep neural feature-based graph selection and parameter initialization method that efficiently scales to a large number of procedural graphs. We evaluate our method on both rendered synthetic materials and real materials captured as flash photographs. We demonstrate that MATch can reconstruct more accurate, general, and complex procedural materials compared to the state of the art. Moreover, by producing a procedural output, we unlock capabilities such as constructing arbitrary-resolution material maps and parametrically editing the material appearance.

VDAC: volume decompose-and-carve for subtractive manufacturing
Ali Mahdavi-Amiri, Fenggen Yu, Haisen Zhao, Adriana Schulz, Hao Zhang
ACM Trans. Graph., 203:1-203:15, 2020. https://doi.org/10.1145/3414685.3417772
Fig. 1. Carvable volume decomposition computed by our algorithm for the high-genus Fertility model, with 6 carving directions (indicated by the yellow arrows) and a total of 10 carvable volumes (one carving direction may yield multiple volumes, e.g., 3 volumes for the second direction). Three insets show physical outputs produced by CNC rough machining. Each carvable volume is continuously carved following a connected Fermat spiral toolpath.

Pixelor: a competitive sketching AI agent. So you think you can sketch?
A. Bhunia, Ayan Das, U. Muhammad, Yongxin Yang, Timothy M. Hospedales, T. Xiang, Yulia Gryaditskaya, Yi-Zhe Song
ACM Trans. Graph., 166:1-166:15, 2020. https://doi.org/10.1145/3414685.3417840
We present the first competitive drawing agent, Pixelor, that exhibits human-level performance at a Pictionary-like sketching game, where the participant whose sketch is recognized first is the winner. Our AI agent can autonomously sketch a given visual concept and achieve a recognizable rendition as quickly as, or faster than, a human competitor. The key to victory is learning the optimal stroke-sequencing strategy that generates the most recognizable and distinguishable strokes first. Training Pixelor is done in two steps. First, we infer the stroke order that maximizes early recognizability of human training sketches. Second, this order is used to supervise the training of a sequence-to-sequence stroke generator. Our key technical contributions are a tractable search of the exponential space of orderings using neural sorting, and an improved Seq2Seq Wasserstein (S2S-WAE) generator that uses an optimal-transport loss to accommodate the multi-modal nature of the optimal stroke distribution. Our analysis shows that Pixelor is better than the human players of the Quick, Draw! game under both AI and human judging of early recognition. To analyze the impact of human competitors' strategies, we conducted a further human study in which participants were given unlimited thinking time and training in early recognizability through feedback from an AI judge. The study shows that humans do gradually improve their strategies with training, but overall Pixelor still matches human performance. The code and the dataset are available at http://sketchx.ai/pixelor.

LoopyCuts: practical feature-preserving block decomposition for strongly hex-dominant meshing
Marco Livesu, N. Pietroni, E. Puppo, A. Sheffer, Paolo Cignoni
ACM Trans. Graph., article 121, 2020. https://doi.org/10.1145/3386569.3392472
Fig. 1. Given a surface mesh and a curvature- and feature-aligned cross-field (a), LoopyCuts generates a sequence of field-aware cutting loops (b) and uses these loops to generate solid cuts through the object (c), decomposing the model into a metamesh consisting of hex (green), prism (blue), and other (orange) simple blocks (d). It converts the metamesh into a hex-mesh via midpoint refinement. The output hex-mesh (e, f) is well shaped and well aligned with the input field.
