Leonardo Righetto, F. Bettio, F. Ponchio, Andrea Giachetti, E. Gobbetti
Eurographics Workshop on Graphics and Cultural Heritage. DOI: 10.2312/gch.20231158
Effective Interactive Visualization of Neural Relightable Images in a Web-based Multi-layered Framework
Relightable images created from Multi-Light Image Collections (MLICs) are among the most commonly employed models for interactive object exploration in cultural heritage. In recent years, neural representations have been shown to produce higher-quality images than the more classic analytical models, such as Polynomial Texture Maps (PTM) or Hemispherical Harmonics (HSH), at similar storage costs. However, their integration in practical interactive tools has so far been limited by two factors: the higher evaluation cost, which makes them difficult to employ for interactive inspection of large images, and the integration cost of incorporating deep-learning libraries into relightable renderers. In this paper, we illustrate how a state-of-the-art neural reflectance model can be directly evaluated, using common WebGL shader features, inside a multi-platform renderer. We then show how this solution can be embedded in a scalable framework capable of handling multi-layered relightable models in web settings. We finally demonstrate the performance and capabilities of the method on cultural heritage objects.
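To make the idea of neural relightable images concrete, the following is a minimal sketch, not the paper's actual model or code: each pixel stores a small latent feature vector, and a shared tiny decoder network maps (features, light direction) to RGB, which is the kind of computation that can be unrolled into a WebGL shader. All sizes, weights, and the `relight` helper are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, F = 4, 4, 9   # image size and per-pixel latent feature count (assumed)
HIDDEN = 16         # hidden width of the shared decoder MLP (assumed)

features = rng.normal(size=(H, W, F))        # per-pixel latent codes
W1 = rng.normal(size=(F + 2, HIDDEN)) * 0.1  # input: features + (lx, ly)
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, 3)) * 0.1      # output layer: RGB
b2 = np.zeros(3)

def relight(light_xy):
    """Decode the whole image for one light direction (lx, ly)."""
    lx, ly = light_xy
    # Concatenate each pixel's features with the shared light direction.
    light = np.broadcast_to(np.array([lx, ly]), (H, W, 2))
    x = np.concatenate([features, light], axis=-1)   # (H, W, F+2)
    h = np.maximum(x @ W1 + b1, 0.0)                 # ReLU hidden layer
    rgb = h @ W2 + b2                                # linear RGB output
    return np.clip(rgb, 0.0, 1.0)                    # (H, W, 3) image

img = relight((0.3, 0.5))
print(img.shape)  # (4, 4, 3)
```

Because the decoder is the same small network at every pixel, its matrix multiplies can be written directly as shader arithmetic, evaluating one pixel per fragment without any deep-learning runtime in the browser.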