{"title":"StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization","authors":"Kaiyuan Tang, Chaoli Wang","doi":"arxiv-2408.00150","DOIUrl":null,"url":null,"abstract":"In volume visualization, visualization synthesis has attracted much attention\ndue to its ability to generate novel visualizations without following the\nconventional rendering pipeline. However, existing solutions based on\ngenerative adversarial networks often require many training images and take\nsignificant training time. Still, issues such as low quality, consistency, and\nflexibility persist. This paper introduces StyleRF-VolVis, an innovative style\ntransfer framework for expressive volume visualization (VolVis) via neural\nradiance field (NeRF). The expressiveness of StyleRF-VolVis is upheld by its\nability to accurately separate the underlying scene geometry (i.e., content)\nand color appearance (i.e., style), conveniently modify color, opacity, and\nlighting of the original rendering while maintaining visual content consistency\nacross the views, and effectively transfer arbitrary styles from reference\nimages to the reconstructed 3D scene. To achieve these, we design a base NeRF\nmodel for scene geometry extraction, a palette color network to classify\nregions of the radiance field for photorealistic editing, and an unrestricted\ncolor network to lift the color palette constraint via knowledge distillation\nfor non-photorealistic editing. We demonstrate the superior quality,\nconsistency, and flexibility of StyleRF-VolVis by experimenting with various\nvolume rendering scenes and reference images and comparing StyleRF-VolVis\nagainst other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF\nand SNeRF) style rendering solutions.","PeriodicalId":501174,"journal":{"name":"arXiv - CS - Graphics","volume":"75 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.00150","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In volume visualization, visualization synthesis has attracted much attention
due to its ability to generate novel visualizations without following the
conventional rendering pipeline. However, existing solutions based on
generative adversarial networks often require many training images and
significant training time, and issues of low quality, inconsistency, and
inflexibility persist. This paper introduces StyleRF-VolVis, an innovative style
transfer framework for expressive volume visualization (VolVis) via neural
radiance fields (NeRF). The expressiveness of StyleRF-VolVis stems from its
ability to (1) accurately separate the underlying scene geometry (i.e.,
content) from the color appearance (i.e., style), (2) conveniently modify the
color, opacity, and lighting of the original rendering while maintaining
visual content consistency across views, and (3) effectively transfer
arbitrary styles from reference images to the reconstructed 3D scene. To this
end, we design a base NeRF
model for scene geometry extraction, a palette color network to classify
regions of the radiance field for photorealistic editing, and an unrestricted
color network to lift the color palette constraint via knowledge distillation
for non-photorealistic editing. We demonstrate the superior quality,
consistency, and flexibility of StyleRF-VolVis by experimenting with various
volume rendering scenes and reference images and by comparing it against
image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF)
style rendering solutions.
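
To make the three-network design above concrete, here is a minimal PyTorch sketch of how such an architecture could be wired together. This is an illustrative reading of the abstract, not the authors' implementation: all class names, layer widths, the palette size, and the one-step distillation loop are hypothetical assumptions, and real NeRF components such as positional encoding, view directions, and volume rendering are omitted.

```python
# Minimal sketch (assumed, not the authors' code) of the three-network design:
# a base NeRF for geometry, a palette color network for photorealistic edits,
# and an unrestricted color network distilled from it for stylized edits.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BaseNeRF(nn.Module):
    """Maps 3D positions to density plus a geometry feature (content).
    Positional encoding and view directions are omitted for brevity."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.density_head = nn.Linear(128, 1)
        self.feature_head = nn.Linear(128, feat_dim)

    def forward(self, xyz):
        h = self.trunk(xyz)
        return self.density_head(h), self.feature_head(h)


class PaletteColorNet(nn.Module):
    """Soft-classifies each sample over a small learnable color palette;
    the output color is a convex blend of palette entries, so recoloring a
    region amounts to editing one palette entry (photorealistic editing)."""
    def __init__(self, feat_dim=64, num_palette=4):  # palette size assumed
        super().__init__()
        self.weight_mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, num_palette),
        )
        self.palette = nn.Parameter(torch.randn(num_palette, 3))

    def forward(self, feat):
        w = torch.softmax(self.weight_mlp(feat), dim=-1)  # (N, P) weights
        return w @ torch.sigmoid(self.palette)            # (N, 3) RGB


class UnrestrictedColorNet(nn.Module):
    """Free-form color head with no palette constraint; after distillation
    it is fine-tuned against a style loss for non-photorealistic editing."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Sigmoid(),
        )

    def forward(self, feat):
        return self.mlp(feat)


# One knowledge-distillation step: the unrestricted head learns to mimic the
# palette head, inheriting the scene's appearance before style optimization.
base, palette_net, free_net = BaseNeRF(), PaletteColorNet(), UnrestrictedColorNet()
optimizer = torch.optim.Adam(free_net.parameters(), lr=1e-3)

xyz = torch.rand(1024, 3)              # random sample points in the volume
with torch.no_grad():                  # geometry and palette stay frozen
    _, feat = base(xyz)
    target_rgb = palette_net(feat)

optimizer.zero_grad()
loss = F.mse_loss(free_net(feat), target_rgb)
loss.backward()
optimizer.step()
```

The design point the sketch highlights is that geometry (density and features from the base NeRF) stays frozen while only the color heads change, which is what allows the scene to be restyled without breaking visual content consistency across views.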